as a "solid little piece of '70s juvenile sci-fi" that is perhaps "not quite as 'scientifically accurate' as it pretends to be, but fun". Muir describes the special as a "time capsule of once-state-of-the-art science fiction". For Law, it is a "half-forgotten experiment" of interest only to dedicated Anderson fans, but also provides "an interesting look at what might have been" had Anderson gone on to make a full series. David Hirsch of Starlog magazine suggests that the lack of interest from TV networks in funding a series may have been due to the fact that the special's first appearance preceded the release of Star Wars (1977), which triggered a renewal of the science fiction genre. Richard Houldsworth of TV Zone attributes the special's failure as a TV pilot to ITC's decision to order a second series of Space: 1999: "As Anderson concentrated on making 1999 'bigger, better and more exciting than ever', Into Infinity just got swallowed up in its own black hole, and stayed there." == Other media == Douglas R. Mason, an author of several Space: 1999 novels, wrote a novelisation of the special for Futurama Publications. As The Day After Tomorrow was conceived as a pilot, Futurama intended Mason's book to be the first in a series. However, when it became apparent that no further episodes would be made, Futurama cancelled the novelisation, which to date remains unpublished. In 2017, a new novelisation by Gregory L. Norris was published by Anderson Entertainment, followed by a sequel in 2019. The special has been rated U by the British Board of Film Classification since 1997. In 2002, a DVD of The Day After Tomorrow and Star Laws, Anderson's 1986 pilot for a series that would later be made as Space Precinct, was released by Fanderson as part of | {"page_id": 5307518, "title": "The Day After Tomorrow (TV special)"} |
Aderemi Oluyomi Kuku (20 March 1941 – 13 February 2022) was a Nigerian mathematician and academic, known for his contributions to the fields of algebraic K-theory and non-commutative geometry. Born in Ijebu-Ode, Ogun State, Nigeria, Kuku began his academic journey at Makerere University College and the University of Ibadan, where he earned his B.Sc. in Mathematics, followed by his M.Sc. and Ph.D. under Joshua Leslie and Hyman Bass. His doctoral research focused on the Whitehead group of p-adic integral group-rings of finite p-groups. Kuku held positions as a lecturer and professor at various Nigerian universities, including the University of Ife and the University of Ibadan, where he served as Head of the Department of Mathematics and Dean of the Postgraduate School. His research involved developing methods for computing higher K-theory of non-commutative rings and articulating higher algebraic K-theory in the language of Mackey functors. His work on equivariant higher algebraic K-theory and its generalisations impacted the field. During his career, Kuku was elected a Fellow of the African Academy of Sciences, the Nigerian Academy of Science, and the American Mathematical Society. He also received the Nigerian National Order of Merit and the Officer of the Order of the Niger. He served as president of the African Mathematical Union, where he worked to promote mathematics across Africa. Kuku's work extended beyond research, encompassing education and mentorship. He authored several books and articles, supervised graduate students, and fostered international collaborations. == Early life and education == Aderemi Oluyomi Kuku was born on 20 March 1941, in Ijebu-Ode, Ogun State, Nigeria. His father Busari Adeoye Kuku was a photographer, and mother Abusatu Oriaran Baruwa was a trader. Aderemi was the third of four brothers, all of whom pursued professional careers. 
Kuku began his education at Bishop Oluwole Memorial School in Agege, Lagos State, | {"page_id": 70864519, "title": "Aderemi Kuku"} |
For a single pair, the minimum possible link bitrate is 192 kbit/s (3 x 64 kbit/s) and the maximum bitrate is 5.7 Mbit/s (89 x 64 kbit/s). On a 0.5 mm wire with 3 dB noise margin and no spectral limitations, the maximum bitrate can be achieved over distances of up to 1 kilometre (3,300 ft). At 6 kilometres (20,000 ft) the maximum achievable bitrate is about 850 kbit/s. The throughput of a 2BASE-TL link is lower than the link's bitrate by an average 5%, due to 64/65-octet encoding and PAF overhead; both factors depend on packet size. == 10PASS-TS == 10PASS-TS is an IEEE 802.3-2008 Physical Layer (PHY) specification for a full-duplex short-reach point-to-point Ethernet link over voice-grade copper wiring. 10PASS-TS PHYs deliver a minimum of 10 Mbit/s over distances of up to 750 metres (2,460 ft), using ITU-T G.993.1 (VDSL) technology over a single copper pair. These PHYs may also support an optional aggregation or bonding of multiple copper pairs, called PME Aggregation Function (PAF). === Details === Unlike other Ethernet physical layers that provide a single rate such as 10, 100, or 1000 Mbit/s, the 10PASS-TS link rate can vary, similar to 2BASE-TL, depending on the copper channel characteristics, such as length, wire diameter (gauge), wiring quality, the number of pairs if the link is aggregated and other factors. VDSL is a short-range technology designed to provide broadband over distances less than 1 km of voice-grade copper twisted pair line, but connection data rates deteriorate quickly as the line distance increases. This has led to VDSL being referred to as a "fiber to the curb" technology, because it requires fiber backhaul to connect with a carrier network over greater distances. Using VDSL for Ethernet in the first mile services may be a useful way to standardise functionality | {"page_id": 5579181, "title": "Ethernet in the first mile"} |
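The rate arithmetic above can be sketched directly: 2BASE-TL rates are multiples of a 64 kbit/s channel, and throughput is reduced by roughly 5% of overhead on average. A minimal sketch (the 5% figure is the text's stated average; real overhead depends on packet size):

```python
# Sketch of the 2BASE-TL rate arithmetic described above.
# The 64 kbit/s granularity and 3..89 channel range come from the text;
# the 5% overhead is the stated average (64/65-octet encoding plus PAF),
# which in practice varies with packet size.

CHANNEL_KBIT = 64  # 2BASE-TL link rates are multiples of 64 kbit/s

def link_bitrate_kbit(channels: int) -> int:
    """Link bitrate for a single pair carrying `channels` 64 kbit/s channels."""
    if not 3 <= channels <= 89:
        raise ValueError("2BASE-TL uses between 3 and 89 channels per pair")
    return channels * CHANNEL_KBIT

def approx_throughput_kbit(bitrate_kbit: float, overhead: float = 0.05) -> float:
    """Approximate Ethernet throughput after encoding/PAF overhead."""
    return bitrate_kbit * (1.0 - overhead)

print(link_bitrate_kbit(3))    # minimum: 192 kbit/s
print(link_bitrate_kbit(89))   # maximum: 5696 kbit/s (~5.7 Mbit/s)
print(approx_throughput_kbit(5696))
```

The same arithmetic explains the headline figures: 3 channels give the 192 kbit/s minimum and 89 channels give the ~5.7 Mbit/s maximum.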
of a modern airliner's avionics. An FMS is a specialized computer system that automates a wide variety of in-flight tasks, reducing the workload on the flight crew to the point that modern civilian aircraft no longer carry flight engineers or navigators. A primary function is in-flight management of the flight plan. Using various sensors (such as GPS and INS often backed up by radio navigation) to determine the aircraft's position, the FMS can guide the aircraft along the flight plan. From the cockpit, the FMS is normally controlled through a Control Display Unit (CDU) which incorporates a small screen and keyboard or touchscreen. The FMS sends the flight plan for display to the Electronic Flight Instrument System (EFIS), Navigation Display (ND), or Multifunction Display (MFD). The FMS can be summarised as being a dual system consisting of the Flight Management Computer (FMC), CDU and a cross talk bus. Floatstick – is a device to measure fuel levels in modern large aircraft. It consists of a closed tube rising from the bottom of the fuel tank. Surrounding the tube is a ring-shaped float, and inside it is a graduated rod indicating fuel capacity. The float and the top of the rod contain magnets. The rod is withdrawn from the bottom of the wing until the magnets stick, the distance it is withdrawn indicating the level of the fuel. When not in use, the stick is secured within the tube. Fluid – In physics, a fluid is a liquid, gas, or other material that continuously deforms (flows) under an applied shear stress, or external force. They have zero shear modulus, or, in simpler terms, are substances which cannot resist any shear force applied to them. Fluid dynamics – In physics and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes | {"page_id": 50775420, "title": "Glossary of aerospace engineering"} |
Ofeq-11, also known as Ofek 11 (Horizon in Hebrew), is part of the Ofeq family of reconnaissance satellites designed and built by Israel Aerospace Industries (IAI) for the Israeli Ministry of Defense. == Launch == Ofek-11 was launched on 13 September 2016, at 14:38 UTC from the Palmachim Airbase in Israel, two years after the launch of Ofeq-10. It was delivered using IAI's Shavit 2 launcher. Compared to its predecessor, the new satellite features an improved version of El-Op's "Jupiter High-Resolution Imaging System", with resolution increased to 0.5 meter, and uses a new satellite bus - OPSAT-3000 - which is a derivative of the satellite bus used in TecSAR-1. == Mission == According to reports, the launch initially looked like a success, but about 90 minutes later, engineers realized that while the satellite had entered orbit, not all systems were functioning or responding to instructions. However, after several days of remote repairs, the satellite was operational and taking high-quality pictures. It has been reported that South Korea is considering utilizing the satellite to obtain reconnaissance on North Korean activities. == References == | {"page_id": 51600173, "title": "Ofeq-11"} |
a design requirement for low weight. === Allocated requirements === A requirement is established by dividing or otherwise allocating a high-level requirement into multiple lower-level requirements. Example: A 100-pound item that consists of two subsystems might result in weight requirements of 70 pounds and 30 pounds for the two lower-level items. Well-known requirements categorization models include FURPS and FURPS+, developed at Hewlett-Packard. == Requirements analysis issues == === Stakeholder issues === Steve McConnell, in his book Rapid Development, details a number of ways users can inhibit requirements gathering:
- Users do not understand what they want or users do not have a clear idea of their requirements
- Users will not commit to a set of written requirements
- Users insist on new requirements after the cost and schedule have been fixed
- Communication with users is slow
- Users often do not participate in reviews or are incapable of doing so
- Users are technically unsophisticated
- Users do not understand the development process
- Users do not know about present technology
This may lead to the situation where user requirements keep changing even when system or product development has been started. === Engineer/developer issues === Possible problems caused by engineers and developers during requirements analysis are:
- A natural inclination towards writing code can lead to implementation beginning before the requirements analysis is complete, potentially resulting in code changes to meet actual requirements once they are known.
- Technical personnel and end-users may have different vocabularies. Consequently, they may wrongly believe they are in perfect agreement until the finished product is supplied.
- Engineers and developers may try to make the requirements fit an existing system or model, rather than develop a system specific to the needs of the client.
=== Attempted solutions === One attempted solution to communications problems has been to employ specialists in business or system | {"page_id": 522449, "title": "Requirements analysis"} |
a TLS handshake. It’s used by an HTTP client and server to negotiate which HTTP version to use (e.g. HTTP/2 or HTTP/1.1). Previously, the implementation would end up allocating a byte[] for use with this HTTP version selection, but now with this PR, the implementation precomputes byte[]s for the most common protocol selections, avoiding the need to re-allocate those byte[]s on each new connection. dotnet/runtime#81096 removes a delegate allocation by moving some code around between the main SslStream implementation and the Platform Abstraction Layer (PAL) that’s used to handle OS-specific code (everything in the SslStream layer is compiled into System.Net.Security.dll regardless of OS, and then depending on the target OS, a different version of the SslStreamPal class is compiled in). dotnet/runtime#84690 from @am11 avoids a gigantic Dictionary that was being created to enable querying for information about a particular cipher suite for use with TLS. Instead of a dictionary mapping a TlsCipherSuite enum to a TlsCipherSuiteData struct (which contained details like an ExchangeAlgorithmType enum value, a CipherAlgorithmType enum value, an int CipherAlgorithmStrength, etc.), a switch statement is used, mapping that same TlsCipherSuite enum to an int that’s packed with all the same information. This not only avoids the run-time costs associated with allocating that dictionary and populating it, it also shaves almost 20Kb off a published Native AOT binary, due to all of the code that was necessary to populate the dictionary. dotnet/runtime#84921 from @am11 uses a similar switch for well-known OIDs. dotnet/runtime#86163 changed an internal ProtocolToken class into a struct, passing it around by ref instead. dotnet/runtime#74695 avoids some SafeHandle allocation in interop as part of certificate handling on Linux. 
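The dictionary-to-switch change described above is at heart a bit-packing trick: several small per-cipher-suite fields are encoded into one integer, so no dictionary or struct table needs to be allocated at run time. A minimal sketch of the idea in Python (the field names and bit widths here are assumptions for illustration, not the actual runtime layout):

```python
# Illustrative bit-packing of several small enum-like fields into one int,
# in the spirit of the TlsCipherSuiteData change described above.
# Field names and bit widths are assumed for illustration only.

def pack(exchange_alg: int, cipher_alg: int, strength_bits: int) -> int:
    """Pack two 4-bit algorithm ids and a 16-bit key strength into one int."""
    assert 0 <= exchange_alg < 16 and 0 <= cipher_alg < 16
    assert 0 <= strength_bits < (1 << 16)
    return (exchange_alg << 20) | (cipher_alg << 16) | strength_bits

def unpack(packed: int) -> tuple:
    """Recover the three fields by shifting and masking."""
    return ((packed >> 20) & 0xF, (packed >> 16) & 0xF, packed & 0xFFFF)

data = pack(exchange_alg=2, cipher_alg=5, strength_bits=256)
assert unpack(data) == (2, 5, 256)
```

In the real change the packed constants are returned from a switch on the enum value, so the lookup compiles to branch-free table-like code with no heap allocation at all.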
SafeHandles are a valuable reliability feature in .NET: they wrap a native handle / file descriptor, providing the finalizer that ensures the resource isn’t leaked, but also providing ref | {"source": 1754, "title": "from dpo"} |
{} ) } 4.1.6 Simple to Share By not providing any functionality, ADTs can have a minimal set of dependencies. This makes them easy to publish and share with other developers. By using a simple data modelling language, it makes it possible to interact with cross-discipline teams, such as DBAs, UI developers and business analysts, using the actual code instead of a handwritten document as the source of truth. Furthermore, tooling can be more easily written to produce or consume schemas from other programming languages and wire protocols. 4.1.7 Counting Complexity The complexity of a data type is the count of values that can exist. A good data type has the least amount of complexity it needs to hold the information it conveys, and no more. Values have a built-in complexity: Unit has one value (why it is called “unit”) Boolean has two values Int has 4,294,967,296 values String has effectively infinite values To find the complexity of a product, we multiply the complexity of each part. (Boolean, Boolean) has 4 values (2*2) (Boolean, Boolean, Boolean) has 8 values (2*2*2) To find the complexity of a coproduct, we add the complexity of each part. (Boolean |: Boolean) has 4 values (2+2) (Boolean |: Boolean |: Boolean) has 6 values (2+2+2) To find the complexity of an ADT with a type parameter, multiply each part by the complexity of the type parameter: Option[Boolean] has 3 values, Some[Boolean] and None (2+1) In FP, functions are total and must return a value for every input, no Exception. Minimising the complexity of inputs and outputs is the best way to achieve totality. As a rule of thumb, it is a sign of a badly designed function when the complexity of a function’s return value is larger than the product of its inputs: it | {"source": 4170, "title": "from dpo"} |
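The counting rules above (products multiply, coproducts add, a type parameter substitutes its own count) can be checked mechanically. A small sketch using plain integers for value counts:

```python
# Counting ADT complexity as described above:
# products multiply value counts, coproducts add them.

UNIT = 1            # Unit has one value
BOOLEAN = 2         # Boolean has two values
INT = 2 ** 32       # a 32-bit Int has 4,294,967,296 values

def product(*parts: int) -> int:
    """Complexity of a product type: multiply the parts."""
    out = 1
    for p in parts:
        out *= p
    return out

def coproduct(*parts: int) -> int:
    """Complexity of a coproduct type: add the parts."""
    return sum(parts)

def option(a: int) -> int:
    """Option[A] is a coproduct of Some[A] (|A| values) and None (1 value)."""
    return coproduct(a, UNIT)

assert product(BOOLEAN, BOOLEAN) == 4            # (Boolean, Boolean)
assert product(BOOLEAN, BOOLEAN, BOOLEAN) == 8   # (Boolean, Boolean, Boolean)
assert coproduct(BOOLEAN, BOOLEAN) == 4          # (Boolean |: Boolean)
assert coproduct(BOOLEAN, BOOLEAN, BOOLEAN) == 6
assert option(BOOLEAN) == 3                      # Option[Boolean]
```

The same helpers make the rule of thumb concrete: a function `A => B` is suspect when the count for `B` exceeds the count for `A`.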
869–880.]. This prior work includes the Syndrome-based algorithm [21. Syndrome based check node processing of high order NB-LDPC decoders. In Telecommunications (ICT), 2015 22nd International Conference on (pp. 156–162).] that efficiently performed parallel CN computations for \(q \ge 16\) and was initially considered for implementing a GF(256) CN processor with a CN degree \(d_c = 4\) [22. A new architecture for high speed, low latency NB-LDPC check node processing for GF(256). In 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring) (pp. 1–5).]. However, its complexity is dominated by the number of computed syndromes, which increases quadratically with \(d_c\). This limits its interest for high coding rates (i.e., high \(d_c\) values). A solution was then proposed based on sorting the input vectors according to a reliability criterion [23. NB-LDPC check node with pre-sorted input. In 2016 9th International Symposium on Turbo Codes and Iterative Information Processing (ISTC) (pp. 196–200); 24. Pre-sorted forward-backward NB-LDPC check node architecture. In IEEE Workshop on Signal Processing Systems.] to significantly reduce the CN hardware complexity without affecting performance. This so-called _presorting_ technique was applied to the syndrome-based architecture in [23. NB-LDPC check node with pre-sorted input. In 2016 9th International Symposium on Turbo Codes and Iterative Information Processing (ISTC) (pp. 196–200).] and to the FB architecture in [24. Pre-sorted forward-backward NB-LDPC check node architecture. In IEEE Workshop on Signal Processing Systems.]. A hybridization of those two architectures was presented in [Marchand, C., | {"source": 6650, "title": "from dpo"} |
In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means. Although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, process combinations. == Systematic botany == Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, | {"page_id": 4183, "title": "Botany"} |
In homogeneous catalysis, C2-symmetric ligands refer to ligands that lack mirror symmetry but have C2 symmetry (two-fold rotational symmetry). Such ligands are usually bidentate and are valuable in catalysis. The C2 symmetry of ligands limits the number of possible reaction pathways and thereby increases enantioselectivity, relative to asymmetrical analogues. C2-symmetric ligands are a subset of chiral ligands. Chiral ligands, including C2-symmetric ligands, combine with metals or other groups to form chiral catalysts. These catalysts engage in enantioselective chemical synthesis, in which chirality in the catalyst yields chirality in the reaction product. == Examples == An early C2-symmetric ligand, diphosphine catalytic ligand DIPAMP, was developed in 1968 by William S. Knowles and coworkers of Monsanto Company, who shared the 2001 Nobel Prize in Chemistry. This ligand was used in the industrial production of L-DOPA. Some classes of C2-symmetric ligands are called privileged ligands, which are ligands that are broadly applicable to multiple catalytic processes, not only a single reaction type. == Mechanistic concepts == While the presence of any symmetry element within a ligand intended for asymmetric induction might appear counterintuitive, asymmetric induction only requires that the ligand be chiral (i.e. have no improper rotation axis). Asymmetry (i.e. absence of any symmetry elements) is not required. C2 symmetry improves the enantioselectivity of the complex by reducing the number of unique geometries in the transition states. Steric and kinetic factors then usually favor the formation of a single product. === Chiral fence === Chiral ligands work by asymmetric induction somewhere along the reaction coordinate. The image to the right illustrates how a chiral ligand may induce an enantioselective reaction. The ligand (in green) has C2 symmetry with its nitrogen, oxygen or phosphorus atoms hugging a central metal atom (in red). 
In this particular ligand the right side is sticking | {"page_id": 56127040, "title": "C2-Symmetric ligands"} |
the more general metal-to-nonmetal transition phenomena that have been intensively studied in recent decades. A one-dimensional analytic model of laser-induced distortion of band structure was presented for a spatially periodic (cosine) potential. This problem is periodic both in space and time and can be solved analytically using the Kramers-Henneberger co-moving frame. The solutions can be given with the help of the Mathieu functions. == In semiconductor physics == Every solid has its own characteristic energy-band structure. This variation in band structure is responsible for the wide range of electrical characteristics observed in various materials. The band structure and spectroscopy of a material also vary with its dimensionality: one, two, or three dimensions. In semiconductors and insulators, electrons are confined to a number of bands of energy, and forbidden from other regions because there are no allowable electronic states for them to occupy. The term "band gap" refers to the energy difference between the top of the valence band and the bottom of the conduction band. Electrons are able to jump from one band to another. However, in order for a valence band electron to be promoted to the conduction band, it requires a specific minimum amount of energy for the transition. This required energy is an intrinsic characteristic of the solid material. Electrons can gain enough energy to jump to the conduction band by absorbing either a phonon (heat) or a photon (light). A semiconductor is a material with an intermediate-sized, non-zero band gap that behaves as an insulator at T = 0 K, but allows thermal excitation of electrons into its conduction band at temperatures that are below its melting point. In contrast, a material with a large band gap is an insulator. In conductors, the valence and conduction bands may overlap, so | {"page_id": 118396, "title": "Band gap"} |
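The minimum-energy condition for photon absorption can be made concrete: a photon promotes a valence electron only if its energy hc/λ is at least the gap energy. A quick sketch (silicon's ~1.12 eV gap is a standard textbook value used here as an assumed example; it is not taken from the text above):

```python
# Photon absorption across a band gap: E_photon = h*c / wavelength must be
# at least the gap energy E_g. Silicon's ~1.12 eV gap is an assumed
# textbook example, not a value from the text above.

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def max_wavelength_nm(gap_ev: float) -> float:
    """Longest photon wavelength able to bridge a gap of gap_ev electronvolts."""
    return H * C / (gap_ev * EV) * 1e9

print(round(max_wavelength_nm(1.12)))  # ~1107 nm, i.e. near-infrared
```

This is why a wide-gap insulator is transparent to visible light: visible photons (~1.8 to 3.1 eV) fall short of its gap, so no interband absorption occurs.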
Hypotaurine is a sulfinic acid that is an intermediate in the biosynthesis of taurine. Like taurine, it also acts as an endogenous neurotransmitter via action on the glycine receptors. It is an osmolyte with antioxidant properties. Hypotaurine is derived from cysteine (and homocysteine). In mammals, the biosynthesis of hypotaurine from cysteine occurs in the pancreas. In the cysteine sulfinic acid pathway, cysteine is first oxidized to its sulfinic acid, catalyzed by the enzyme cysteine dioxygenase. Cysteine sulfinic acid, in turn, is decarboxylated by sulfinoalanine decarboxylase to form hypotaurine. Hypotaurine is enzymatically oxidized to yield taurine by hypotaurine dehydrogenase. == References == | {"page_id": 19152656, "title": "Hypotaurine"} |
predict its position for several million years (both forward and backward in time), but after intervals much longer than the Lyapunov time of 10–20 million years, calculations become unreliable: Pluto is sensitive to immeasurably small details of the Solar System, hard-to-predict factors that will gradually change Pluto's position in its orbit. The semi-major axis of Pluto's orbit varies between about 39.3 and 39.6 AU with a period of about 19,951 years, corresponding to an orbital period varying between 246 and 249 years. The semi-major axis and period are presently getting longer. === Relationship with Neptune === Despite Pluto's orbit appearing to cross that of Neptune when viewed from north or south of the Solar System, the two objects' orbits do not intersect. When Pluto is closest to the Sun, and close to Neptune's orbit as viewed from such a position, it is also the farthest north of Neptune's path. Pluto's orbit passes about 8 AU north of that of Neptune, preventing a collision. This alone is not enough to protect Pluto; perturbations from the planets (especially Neptune) could alter Pluto's orbit (such as its orbital precession) over millions of years so that a collision could happen. However, Pluto is also protected by its 2:3 orbital resonance with Neptune: for every two orbits that Pluto makes around the Sun, Neptune makes three, in a frame of reference that rotates at the rate that Pluto's perihelion precesses (about 0.97×10−4 degrees per year). Each cycle lasts about 495 years. (There are many other objects in this same resonance, called plutinos.) At present, in each 495-year cycle, the first time Pluto is at perihelion (such as in 1989), Neptune is 57° ahead of Pluto. By Pluto's second passage through perihelion, Neptune will have completed a further one and a half of its own orbits, | {"page_id": 44469, "title": "Pluto"} |
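The 2:3 resonance arithmetic above can be checked directly: two Pluto orbits span the same ~495-year cycle as three Neptune orbits. The periods below are the usual rounded values, assumed here for illustration:

```python
# Checking the 2:3 Pluto-Neptune resonance arithmetic from the text.
# 247.5 yr (Pluto) and 165 yr (Neptune) are rounded illustrative periods.

pluto_period = 247.5    # years per Pluto orbit (within the 246-249 range given)
neptune_period = 165.0  # years per Neptune orbit

cycle_pluto = 2 * pluto_period      # two Pluto orbits
cycle_neptune = 3 * neptune_period  # three Neptune orbits

# Both sides come to the ~495-year cycle the text describes.
assert cycle_pluto == cycle_neptune == 495.0
```

The mean-motion ratio 3:2 is exactly what locks the geometry: whenever Pluto returns to perihelion, Neptune has completed an extra half-orbit relative to it, as the text goes on to describe.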
areas. He or she must be able to effectively utilize the services provided by scientists, lawyers, accountants, and business people of many kinds. Naval architects typically work for shipyards, ship owners, design firms and consultancies, equipment manufacturers, Classification societies, regulatory bodies (Admiralty law), navies, and governments. A small number of naval architects also work in education; only five universities in the United States have accredited Naval Architecture & Marine Engineering programs. Naval architecture is also taught at the United States Naval Academy, whose instructors include CAPT Michael Bito, USN. == List of naval architecture software == Aveva - Tribon FORAN System Orca3D - plugin for Rhinoceros 3D Safehull Sesam == See also == == References == == Further reading == Ferreiro, Larrie D. (2007). Ships and Science: The Birth of Naval Architecture in the Scientific Revolution, 1600–1800. MIT Press. ISBN 978-0-262-06259-6. Ferreiro, Larrie D. (2020). Bridging the Seas: The Rise of Naval Architecture in the Industrial Age, 1800–2000. MIT Press. ISBN 978-0-262-53807-7. Paasch, H. Dictionary of Naval Terms, from Keel to Truck. London: G. Philip & Son, 1908. | {"page_id": 76653, "title": "Naval architecture"} |
HD 40307 f is an extrasolar planet orbiting the star HD 40307. It is located 42 light-years away in the direction of the southern constellation Pictor. The planet was discovered by the radial velocity method, using the European Southern Observatory's HARPS apparatus, by a team of astronomers led by Mikko Tuomi at the University of Hertfordshire and Guillem Anglada-Escude of the University of Göttingen, Germany. The existence of the planet was confirmed in 2015. == Planetary characteristics == This planet is the fifth planet from the star, at a distance of about 0.25 AU (compared to 0.39 AU for Mercury) with negligible eccentricity. HD 40307 f's minimum mass is 5.2 times that of Earth, and dynamical models suggest it cannot be much more (and so its orbit is measured close to edge-on). Planets like this in the system have been presumed to be "super-Earths". Even though HD 40307 f is closer to the star than Mercury is to the Sun, it gets (slightly) less insolation than Mercury because the parent star is dimmer than our home star. It still gets more heat than Venus (like Gliese 581 c), and it has more gravitational potential than Venus has. HD 40307 f is more likely a super-Venus than a "super-Earth". Moreover, planets b, c, and d are presumed to have migrated in from outer orbits; and planet b is predicted to be a sub-Neptune. == References == == External links == "Super-Earth Discovered in Star's Habitable Zone". Exoplanets. 2017-05-10. Archived from the original on 2012-11-28. Retrieved 2012-11-10. | {"page_id": 37601769, "title": "HD 40307 f"} |
format in North America. Included in the SACDs in this series are the full English lyrics of each album, and also Japanese translations of the lyrics. == Background == Most of the releases have previously been available on DVD-Audio in the United States, although at least the Deep Purple album Machine Head has previously been released in the SACD format by EMI in Europe. Some of these new releases were released as 4-channel quadraphonic LP records in the 1970s, although such plans for albums as Hotel California by The Eagles, which had been mixed for a quadraphonic release, were dropped following the demise of that format. The DVD-Audio and the SACD formats have been involved in a format war since the late 1990s. DVD-Audio had been endorsed by the Warner Music Group, while the SACD format had been endorsed by Sony and Universal Music Group, with an especially high profile by Virgin Records of the Universal Music Group, most notably for the back catalogue of all but one of the Genesis studio albums and for Mike Oldfield’s Tubular Bells. (Virgin does not hold the rights to the first Genesis album, From Genesis to Revelation.) EMI has wavered between the two, with titles existing in both formats. These include Machine Head by Deep Purple on SACD and such titles as A Night At The Opera by Queen and the Beatles compilation Love on DVD-Audio. The decision on the latter may have reflected the fact that it is a soundtrack for a show that is presented in Las Vegas in surround sound, and thus has an important market in North America (see below). For some reason, Sony failed to promote SACDs actively in North America, with the result that DVD-Audios more or less reigned in the surround music market there. Elsewhere, though, | {"page_id": 33585611, "title": "The Warner Premium Sound series"} |
its Energiewende goal, equal to the maximum historical value thus far. Germany spends €1.5 billion per annum on energy research (2013 figure) in an effort to solve the technical and social issues raised by the transition. This includes a number of computer studies that have confirmed the feasibility and a similar cost (relative to business-as-usual, and given that carbon is adequately priced) of the Energiewende. These initiatives go well beyond European Union legislation and the national policies of other European states. The policy objectives have been embraced by the German federal government and have resulted in a huge expansion of renewables, particularly wind power. Germany's share of renewables increased from around 5% in 1999 to 22.9% in 2012, surpassing the OECD average of 18% usage of renewables. Producers have been guaranteed a fixed feed-in tariff for 20 years, guaranteeing a fixed income. Energy co-operatives have been created, and efforts were made to decentralize control and profits. The large energy companies have a disproportionately small share of the renewables market. However, in some cases poor investment designs have caused bankruptcies and low returns, and unrealistic promises have been shown to be far from reality. Nuclear power plants were closed, and the existing nine plants will close earlier than planned, in 2022. One factor that has inhibited efficient employment of new renewable energy has been the lack of an accompanying investment in power infrastructure to bring the power to market. It is believed 8,300 km of power lines must be built or upgraded. The different German states have varying attitudes to the construction of new power lines. Industry has had its rates frozen, and so the increased costs of the Energiewende have been passed on to consumers, who have had rising electricity bills. == Voluntary market mechanisms for renewable electricity == | {"page_id": 10418624, "title": "Renewable energy commercialization"} |
John Desmond Sinclair (14 March 1927 – 11 February 2018) was a New Zealand neurophysiologist and middle-distance athlete who represented his country at the 1950 British Empire Games. He was involved in the establishment of the medical school at the University of Auckland in 1968, and was the school's foundation professor of physiology. == Early life and family == Born in Auckland on 14 March 1927, Sinclair was the fourth of 10 children of Ernest Duncan Sinclair and Florence Pyrenes Sinclair (née Kennedy). His siblings included the historian Keith Sinclair and the journalist and talkback radio host Geoff Sinclair. Jack Sinclair was educated at Mount Albert Grammar School, and went on to study at the University of Otago, from where he graduated Bachelor of Medical Sciences in 1948 and MB ChB. In 1952, Sinclair married Patricia Colleen Dunn, and the couple went on to have four children. == Athletics == Sinclair was prominent as a middle-distance athlete during his time as a student at the University of Otago. At the 1946 New Zealand University Easter tournament in Christchurch, he won the one mile, finished second in the 880 yards and third in the three miles. In 1947, he won the Otago 880 yards championship in a time of 2:02.2, defeating Arch Jelley, and was the New Zealand University champion for both the 880 yards and one mile. The following year, representing Otago, Sinclair won the first of his two New Zealand national one-mile titles, defeating Maurice Marshall in a time of 4:23.4 at the national championships in Dunedin, and also finished second in the 880 yards. A few weeks later, he broke Jack Lovelock's university record of 4:28.0 for the mile, running 4:23.6 at the Otago University sports. Also in 1948, Sinclair retained his New Zealand University 880 yards and | {"page_id": 69701496, "title": "Jack Sinclair (physiologist)"} |
Figure 5 shows how these various refinements pick out different elements of the target word’s linguistic environment. The pipe or bar notation (|) is simply to create pairs, or tuples — for example pairing a word with its part-of-speech tag. The term contextual element is used to refer to a basis vector term which is present in the context of a particular instance of the target word. The intuition for building the word vectors remains the same, but now the basis vectors are more complex. For example, in the grammatical relations case, counts are required for the number of times that goal, say, occurs as the direct object of the verb scored; and in an adjective modifier relation with first; and so on for all word-grammatical relation pairs chosen to constitute the basis vectors. The idea is that these more informative linguistic relations will be more indicative of the meaning of the target word. The linguistic processing applied to the sentence in the example is standard in the Computational Linguistics literature. The part-of-speech tags are from the Penn Treebank tagset (Marcus et al., 1993) and could be automatically applied using any number of freely available part-of-speech taggers (Brants, 2000; Toutanova et al., 2003; Curran & Clark, 2003). The grammatical relations — expressing syntactic head-dependency relations — are from the Briscoe & Carroll (2006) scheme, and could be automatically applied using e.g. the RASP (Briscoe et al., 2006) or C&C (Clark & Curran, 2007) parsers. Another standard practice is to lemmatise the sentence (Minnen et al., 2001) so that the direct object of scored, for example, would be equated with the direct object of score or scores (i.e. each of these three word-grammatical relation pairs would correspond to the same basis vector). | {"source": 983, "title": "from dpo"}
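The counting procedure described above reduces to tallying (word | grammatical-relation) pairs. A minimal sketch, assuming the parser output is already available as (head, relation, dependent) triples (the triple format and relation names here are illustrative, not any particular parser's output):

```python
from collections import Counter

def gr_vector(target, triples):
    """Build a grammatical-relation vector for `target`: each basis
    vector is a (word | grammatical-relation) pair observed as a
    contextual element of the target word."""
    vec = Counter()
    for head, rel, dep in triples:
        if head == target:
            vec[(dep, rel)] += 1              # e.g. scored --dobj--> goal
        elif dep == target:
            vec[(head, "inv-" + rel)] += 1    # inverse direction of the relation
    return vec

# Toy (lemmatised) parse of "They scored the first goal"; triples invented
triples = [("score", "dobj", "goal"), ("goal", "amod", "first")]
vec = gr_vector("goal", triples)
```

Lemmatisation, as the passage notes, is what makes the direct objects of score, scores, and scored all increment the same basis vector.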
purpose of this function is to facilitate the usage of multigraphs with the VF2 algorithm.

Value: a new graph object with the edges deleted.

Related documentation in the C library: igraph_simplify(), igraph_is_simple().

Author(s): Gabor Csardi

See Also: which_loop(), which_multiple() and count_multiple(), delete_edges(), delete_vertices(). Other functions for manipulating graph structure: +.igraph(), add_edges(), add_vertices(), complementer(), compose(), connect(), contract(), delete_edges(), delete_vertices(), difference(), difference.igraph(), disjoint_union(), edge(), igraph-minus, intersection(), intersection.igraph(), path(), permute(), rep.igraph(), reverse_edges(), union(), union.igraph(), vertex()

Examples:
g <- make_graph(c(1, 2, 1, 2, 3, 3))
is_simple(g)
is_simple(simplify(g, remove.loops = FALSE))
is_simple(simplify(g, remove.multiple = FALSE))
is_simple(simplify(g))

spectrum: Eigenvalues and eigenvectors of the adjacency matrix of a graph

Description: Calculate selected eigenvalues and eigenvectors of a (supposedly sparse) graph.

Usage:
spectrum(
  graph,
  algorithm = c("arpack", "auto", "lapack", "comp_auto", "comp_lapack", "comp_arpack"),
  which = list(),
  options = arpack_defaults()
)

Arguments:
graph: The input graph, can be directed or undirected.
algorithm: The algorithm to use. Currently only arpack is implemented, which uses the ARPACK solver. See also arpack().
which: A list to specify which eigenvalues and eigenvectors to calculate. By default the leading (i.e. largest magnitude) eigenvalue and the corresponding eigenvector are calculated.
options: Options for the ARPACK solver. See arpack_defaults().

Details: The which argument is a list and it specifies which eigenvalues and corresponding eigenvectors to calculate. There are eight options: 1. Eigenvalues with the largest magnitude: set pos to LM, and howmany to the number of eigenvalues you want. 2. Eigenvalues with the smallest magnitude: set pos to SM and howmany to the number of eigenvalues you want. 3. Largest eigenvalues: set pos to LA and howmany to the number of eigenvalues you | {"source": 2689, "title": "from dpo"}
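Conceptually, the documented default (leading eigenvalue plus eigenvector of the adjacency matrix) is what a plain power iteration computes. The sketch below is a language-neutral illustration of that idea in Python; it is not the ARPACK routine that igraph actually calls:

```python
def power_iteration(adj, iters=100):
    """Magnitude of the leading eigenvalue and an (unnormalised-sign)
    eigenvector of a small adjacency matrix, via power iteration.
    Assumes a unique largest-magnitude eigenvalue (not true for
    bipartite graphs, whose spectrum is symmetric about zero)."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)       # infinity-norm normalisation
        if lam == 0.0:
            break                          # empty graph: everything is zero
        v = [x / lam for x in w]
    return lam, v

# Triangle graph K3: adjacency spectrum is {2, -1, -1}, leading value 2
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
lam, v = power_iteration(adj)
```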
bed deepens every year. Cohoes, in the memory of many people now living, was insulated by every flood of the river. What was the eastern channel has now become a lake, 9 miles in length and one in width, into which the river at this day never flows. This river yields turtle of a peculiar kind, perch, trout, gar, pike, mullets, herrings, carp, spatula-fish of 50 lb. weight, cat-fish of 100 lb. weight, buffalo fish, and sturgeon. Aligators or crocodiles have been seen as high up as the Acansas. It also abounds in herons, cranes, ducks, brant, geese, and swans. Its passage is commanded by a fort established by this state, five miles below the mouth of Ohio, and ten miles above the Carolina boundary. The Missouri, since the treaty of Paris, the Illinois and northern branches of the Ohio, since the cession to Congress, are no longer within our limits. Yet having been so heretofore, and still opening to us channels of extensive communication with the western and north-western country, they shall be noted in their order. The Missouri is, in fact, the principal river, contributing more to the common stream than does the Missisipi, even after its junction with the Illinois. It is remarkably cold, muddy and rapid. Its overflowings are considerable. They happen during the months of June and July. Their commencement being so much later than those of the Missisipi, would induce a belief that the sources of the Missouri are northward of those of the Missisipi, unless we suppose that the cold increases again with the ascent of the land from the Missisipi westwardly. That this ascent is great, is proved by the rapidity of the river. Six miles above the mouth it is brought within the compass of a | {"source": 4964, "title": "from dpo"}
k, offers a tradeoff between computational power and bandwidth. The bandwidth reduction is independent of the depth of the tree; it depends only on the branching factor. We find experimentally that with a branching factor of k = 1024, which provides a factor of 10 reduction in bandwidth, it takes 110.1 milliseconds on average per leaf to construct a Verkle Tree with $2^{14}$ leaves. A branching factor of k = 32, which provides a bandwidth reduction factor of 5, yields a construction time of 8.4 milliseconds on average per leaf for a tree with $2^{14}$ leaves. (The performance on a tree with $2^{14}$ leaves is representative of larger trees because the asymptotics already dominate the computation costs.) My role in this research project has been proving the time complexities of Verkle Trees, implementing Verkle Trees, and testing and benchmarking the implementation. ### 172) Andrew Ahn (MIT), Gopal Goel (PRIMES), and Andrew Yao (PRIMES), Derivative Asymptotics of Uniform Gelfand-Tsetlin Patterns William Fisher, Polynomial Wolff Axioms and Multilinear Kakeya-type Estimates for Bent Tubes in $R^n$, which is called «Vinberg’s Naroch Biological Station» and used both for teaching and research purposes. == Education == • High Education Diploma is five years of studies for full-time students, and six years for distance learning students; totally about 5000 hours of auditorium and laboratory work. It is widely accepted as equivalent to Bachelor-level degree in the Western Europe. Note that the number of lecture and practical classes is significantly higher (sometimes by 10–15 times) in BSU's High Education Diploma than in most universities of Western Europe, North America or Australia (B.Sc. Course). • Magistrature is usually one year-long postgraduate course, which is equivalent to Masters Course in English speaking countries. | {"page_id": 37302349, "title": "Belarusian State University Faculty of Biology"}
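The Verkle Tree bandwidth figures quoted earlier in this extract are consistent with a simple reading: a binary Merkle proof has length proportional to log2(n), a branching-factor-k Verkle proof to log_k(n), so the reduction factor is log2(k). The formula below is my inference from the reported numbers, not taken from the abstract itself:

```python
import math

def bandwidth_reduction(k):
    """Proof-size reduction of a branching-factor-k Verkle Tree relative
    to a binary Merkle tree: log2(n) / log_k(n) = log2(k), independent
    of the number of leaves n, matching the claim that the reduction
    depends only on the branching factor."""
    return math.log2(k)

print(bandwidth_reduction(1024))  # 10.0, as quoted for k = 1024
print(bandwidth_reduction(32))    # 5.0, as quoted for k = 32
```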
Sharon J. Goldwater is an American and British computer scientist, cognitive scientist, developmental linguist, and natural language processing researcher who holds the Personal Chair of Computational Language Learning in the University of Edinburgh School of Informatics. Her research involves the unsupervised learning of language by computers, and computer modeling of language development in children. == Education and career == Goldwater is a 1998 graduate of Brown University, and worked as a researcher at SRI International from 1998 to 2000. She then returned to Brown for graduate study in cognitive and linguistic sciences, completing her Ph.D. in 2006. Her dissertation, Nonparametric Bayesian Models of Lexical Acquisition, was supervised by Mark Johnson. After postdoctoral research at Stanford University, she took her present position at the University of Edinburgh. She was given a personal chair in 2018. == Recognition == Goldwater was the 2016 winner of the Roger Needham Award of the British Computer Society. == References == == External links == Home page Sharon Goldwater publications indexed by Google Scholar | {"page_id": 71405390, "title": "Sharon Goldwater"} |
dissolved in organic solvent and hemicelluloses are used to produce more organic solvent. Organic solvents are collected by separating water from the cooking liquor and then the lignin is precipitated by adding water, heat, and filtration. === Aldehyde assisted fractionation (AAF) process === The Bloom process was developed at EPFL in Lausanne and is commercialised by Bloom Biorenewables Sàrl. This method is based on a protection chemistry that prevents lignin and C5 sugars condensation. == References == | {"page_id": 30537541, "title": "Organosolv"} |
of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction. That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows.: 120–121 There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping. Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", E(A to D) × E(B to C) − E(A to C) × E(B to D), where we would expect, from our everyday idea of probabilities, that it would be a sum.: 112–113 === Propagators === Finally, one has to compute P(A to | {"page_id": 25268, "title": "Quantum electrodynamics"} |
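The antisymmetrisation rule in the last paragraph is easy to mimic with complex arithmetic. The amplitude values below are made up purely for illustration; only the combination rule (products of sub-amplitudes, with a minus sign on the exchanged diagram) comes from the text:

```python
# Toy probability amplitudes (arbitrary complex numbers, not physical values)
E_AD, E_BC = 0.6 + 0.2j, 0.5 - 0.1j   # direct diagram:    A -> D, B -> C
E_AC, E_BD = 0.3 + 0.4j, 0.2 + 0.1j   # exchanged diagram: A -> C, B -> D

# Fermi-Dirac statistics: the exchanged diagram enters with a minus sign,
# so the total is a "difference" where everyday probability would sum.
amplitude = E_AD * E_BC - E_AC * E_BD
probability = abs(amplitude) ** 2      # squared length of the final arrow
```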
Star hopping is a technique that amateur astronomers often use to locate astronomical objects in the night sky. It can be used instead of or in addition to setting circles or go-to/push-to systems. == The problem == Many celestial objects of interest are too faint to be visible to the unaided eye. Telescopes or binoculars collect much more light, making faint objects visible, but have a smaller field of view, thus complicating orientation on the sky. The field of view of binoculars is rarely more than eight degrees, while that of typical amateur telescopes may be substantially less than one degree, depending on the magnification used. Many objects are best observed using higher magnifications, which inevitably go along with narrow fields of view. == The technique == Star hopping uses bright stars as a guide to finding fainter objects. A knowledge of the relative positions of bright stars and target objects is essential. After planning the star hop with the aid of a star chart, the observer first locates one or more bright stars in a finderscope, reflex sight, or, at a low magnification, with the instrument to be used for observation. The instrument is then moved by one or more increments, possibly using a reticle to identify specific angular distances, to follow identified patterns of stars in the sky (known as asterisms), until the target object is reached. Using a telescope equipped with a properly aligned equatorial mount, the observer may also follow the equatorial coordinate system on a star map to move along the lines of right ascension or declination from a well known object to find a target. This can be assisted using setting circles. Once an instrument is centered on the target object, higher magnifications may be used for observation. == Example == A simple example | {"page_id": 841045, "title": "Star hopping"} |
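Planning hops along the coordinate grid amounts to computing angular separations between a guide star and the target. A hedged sketch using the standard spherical-trigonometry formula (the coordinates shown are illustrative placeholders, not real star positions):

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions,
    given right ascension and declination in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_d = (math.sin(dec1) * math.sin(dec2)
             + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp against floating-point overshoot before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

# Two points on the celestial equator, 5 degrees apart in right ascension
sep = angular_separation(10.0, 0.0, 15.0, 0.0)
```

Comparing such separations against the instrument's field of view tells the observer how many hops a planned route will need.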
blood is in a mixture of fluids. Some examples of DNA methylation markers are Mens1 (menstrual blood), Spei1 (saliva), and Sperm2 (seminal fluid). DNA methylation provides relatively good sensitivity when identifying and detecting body fluids. In one study, only ten nanograms of a sample were necessary to obtain successful results. DNA methylation provides good discrimination of mixed samples, since it involves markers that give "on or off" signals. DNA methylation is not impervious to external conditions, but even under degraded conditions the markers are stable enough that there are still noticeable differences between degraded samples and control samples. Specifically, one study found no noticeable changes in methylation patterns over an extensive period of time. The detection of DNA methylation in cell-free DNA and other body fluids has recently become one of the main approaches to liquid biopsy. In particular, the identification of tissue-specific and disease-specific patterns allows for non-invasive detection and monitoring of diseases such as cancer. Compared to strictly genomic approaches to liquid biopsy, DNA methylation profiling offers a larger number of differentially methylated CpG sites and differentially methylated regions (DMRs), potentially enhancing its sensitivity. Signal deconvolution algorithms based on DNA methylation have been successfully applied to cell-free DNA and can nominate the tissue of origin of cancers of unknown primary, allograft rejection, and resistance to hormone therapy. == Computational prediction == DNA methylation can also be detected by computational models through sophisticated algorithms and methods. Computational models can facilitate the global profiling of DNA methylation across chromosomes, and often such models are faster and cheaper to perform than biological assays.
Recent computational models include those of Bhasin et al., Bock et al., and Zheng et al. Together with biological assays, these methods greatly facilitate the DNA | {"page_id": 1137227, "title": "DNA methylation"}
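To give a flavour of the sequence features such computational models typically consume, the CpG observed/expected ratio, a classic CpG-island statistic, can be computed directly from a DNA string. This is an illustrative feature extractor, not a reimplementation of any of the cited models:

```python
def cpg_obs_exp(seq):
    """Observed/expected CpG ratio: (#CG dinucleotides * length) / (#C * #G).
    Values near 1 suggest CpG-island-like (typically unmethylated) DNA;
    methylation-prone bulk genome sequence scores much lower."""
    seq = seq.upper()
    n_c, n_g = seq.count("C"), seq.count("G")
    n_cg = seq.count("CG")   # CG can't self-overlap, so count() is exact
    if n_c == 0 or n_g == 0:
        return 0.0
    return (n_cg * len(seq)) / (n_c * n_g)

ratio = cpg_obs_exp("CGCGCGAT")
```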
and grooming, the epigenetic differences are reversed, supporting a causal relationship between the maternal effect and the epigenetic stress responses in offspring. This proposes that the offspring's epigenome can be altered and established through early life experiences. The effect of stress on sleep can be predicted long before a baby is born. It is hypothesized that increasing cortisol levels in mothers reduces the amount of glucocorticoid receptors (GRs) in an infant's hippocampus, lowering the physiological role of the negative feedback loop on the hypothalamic-pituitary-adrenal (HPA) axis. The HPA axis is important for regulating the wake-sleep cycle but works with other factors that help modulate sleep as well. When the negative feedback loop is disrupted due to stress, the HPA axis in newborns becomes hyperactive and the amount of cortisol in circulation elevates. However, the hyperactivity of the HPA axis and the elevated levels of cortisol in the hippocampus can be reversed or lowered to normal levels after demethylation of the hippocampal GR promoter, further providing evidence of the involvement of epigenetic mechanisms in HPA axis modifications. Glucocorticoids are a necessity for life. They play a large role in a majority of physiological functions involving metabolism, blood pressure, breathing, the immune system, and behavior. Either acute or chronic stress can alter the response of the HPA axis. However, the stage of life at which an individual is exposed to stress will determine the magnitude of the consequences they will face in the future. Early life exposure to stress during the critical period of childhood development can result in permanent changes to adult response systems. == Sleep deprivation == Sleep deprivation is a significant societal problem. It is estimated that around 35.2% of all adults in the US sleep less than 7 hours. 
Lifestyle choices, health conditions, and the use of stimulants | {"page_id": 69860092, "title": "Sleep epigenetics"} |
Cullen number Proth number Woodall number == References == == Further reading == Guy, Richard K. (2004), Unsolved Problems in Number Theory, New York: Springer-Verlag, p. 120, ISBN 0-387-20860-7 == External links == The Sierpinski problem: definition and status Weisstein, Eric W. "Sierpinski's composite number theorem". MathWorld. Archived at Ghostarchive and the Wayback Machine: Grime, Dr. James (13 November 2017). "78557 and Proth Primes" (video). YouTube. Brady Haran. Retrieved 13 November 2017. | {"page_id": 169570, "title": "Sierpiński number"} |
=== Phytostabilisation === Phytostabilisation is a form of phytoremediation that uses hyperaccumulator plants for long-term stabilisation and containment of tailings, by sequestering pollutants in soil near the roots. The plant's presence can reduce wind erosion, or the plant's roots can prevent water erosion, immobilise metals by adsorption or accumulation, and provide a zone around the roots where the metals can precipitate and stabilise. Pollutants become less bioavailable, and livestock, wildlife, and human exposure is reduced. This approach can be especially useful in dry environments, which are subject to wind and water dispersion. === Different methods === Considerable effort and research continues to be made into discovering and refining better methods of tailings disposal. Research at the Porgera Gold Mine is focusing on developing a method of combining tailings products with coarse waste rock and waste muds to create a product that can be stored on the surface in generic-looking waste dumps or stockpiles. This would allow the current use of riverine disposal to cease. Considerable work remains to be done. However, co-disposal has been successfully implemented by several designers including AMEC at, for example, the Elkview Mine in British Columbia. === Pond reclamation by microbiology === During extraction of the oil from oil sand, tailings consisting of water, silt, clays, and other solvents are also created. These solids settle by gravity to become mature fine tailings. Foght et al. (1985) estimated that there are 10^3 anaerobic heterotrophs and 10^4 sulfate-reducing prokaryotes per milliliter in the tailings pond, based on conventional most-probable-number methods. Foght set up an experiment with two tailings ponds; analysis of the archaea, bacteria, and gas released from the tailings ponds showed that the organisms were methanogens. As the depth increased, the moles of CH4 released actually decreased.
Siddique (2006, 2007) states that methanogens in the | {"page_id": 469762, "title": "Tailings"} |
Small Outline Diode (SOD) is a designation for a group of semiconductor packages for surface mounted diodes. The standard includes multiple variants such as SOD-123, SOD-323, SOD-523 and SOD-923. SOD-123 is the largest, SOD-923 is the smallest. == References == | {"page_id": 41403403, "title": "Small Outline Diode"} |
string ; "sizeof(x): %d\n"
 8048422: e8 d9 fe ff ff          call   8048300               ; printf
 8048427: c7 44 24 08 04 00 00    movl   $0x4,0x8(%esp)        ; len arg to
 804842e: 00                                                   ; strncpy
 804842f: 8b 45 0c                mov    0xc(%ebp),%eax
 8048432: 89 44 24 04             mov    %eax,0x4(%esp)        ; data to copy
 8048436: 89 1c 24                mov    %ebx,(%esp)           ; where to write
                                                               ; ebx = adjusted esp + 12 (see 0x8048410)
 8048439: e8 e2 fe ff ff          call   8048320               ; strncpy
 804843e: 89 f4                   mov    %esi,%esp             ; restore esp
 8048440: b8 3a 00 00 00          mov    $0x3a,%eax            ; ready to return 58
 8048445: 8d 65 f8                lea    0xfffffff8(%ebp),%esp ; we restore esp again, just in case it
                                                               ; didn't happen in the first place.
 8048448: 5b                      pop    %ebx
 8048449: 5e                      pop    %esi
 804844a: 5d                      pop    %ebp
 804844b: c3                      ret                          ; restore registers and return.

What can we learn from the above assembly output?

1) There is some rounding done on the supplied value, meaning that small negative values (-15 to -1) and small positive values (1 to 15) will become 0. This might possibly be useful, as we'll see below. When the supplied value is -16 or less, it becomes possible to move the stack pointer backwards (closer to the top of the stack). The instruction sub %eax, %esp at 0x804840e can be seen as add $16, %esp when len is -16.

2) The stack pointer is decremented by the paragraph-aligned supplied value. Since we can supply an almost arbitrary value, we can point the stack pointer at a chosen paragraph. If the stack pointer value is known, we can calculate the offset needed to point the stack at that location in memory. This allows us to modify writable sections such as the GOT and heap.

3) gcc | {"source": 1201, "title": "from dpo"}
0]. Keeping the default values, we get:

>>> from sklearn.cluster import KMeans
>>> km = KMeans(n_clusters=3)
>>> km.fit(X)
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
       n_clusters=3, n_init=10, n_jobs=1, precompute_distances='auto',
       random_state=None, tol=0.0001, verbose=0)
>>> print(km.cluster_centers_)
[[ 1.39014517, 1.38533993]
 [ 9.78473454, 6.1946332 ]
 [-5.47807472, 3.73913652]]

Clustering Fundamentals [ 185 ]

Replotting the data using three different markers, it's possible to verify how k-means successfully separated the data. In this case, the separation was very easy because k-means is based on Euclidean distance, which is radial, and therefore the clusters are expected to be convex. When this doesn't happen, the problem cannot be solved using this algorithm. Most of the time, even if the convexity is not fully guaranteed, k-means can produce good results, but there are several situations when the expected clustering is impossible and letting k-means find the centroids can lead to completely wrong solutions. Let's consider the case of concentric circles. scikit-learn provides a built-in function to generate such datasets:

>>> from sklearn.datasets import make_circles
>>> nb_samples = 1000
>>> X, Y = make_circles(n_samples=nb_samples, noise=0.05)

The plot of this dataset is shown in the following figure. We would like to have an internal cluster (corresponding to the samples depicted with triangular markers) and an external one (depicted by dots). However, such sets are not convex, and it's impossible for k-means to separate them correctly (the means should be the same!). In fact, suppose we try to apply the algorithm to two clusters:

>>> km = KMeans(n_clusters=2)
>>> km.fit(X)
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
       n_clusters=2, n_init=10, n_jobs=1, precompute_distances='auto',
       random_state=None, tol=0.0001, verbose=0)

We get the separation shown in the following figure. As expected, k-means converged on the two centroids in the middle of the two half-circles, and the resulting clustering is quite different from what | {"source": 3662, "title": "from dpo"}
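This failure mode is easy to reproduce without scikit-learn. Below, a bare-bones Lloyd's algorithm (a minimal stand-in for KMeans, with hand-picked deterministic initial centroids instead of k-means++) is run on two noise-free concentric rings. Each resulting cluster ends up containing points from both rings, because the means settle in the middle of two half-moons rather than at the ring centres:

```python
import math

def assign(p, c0, c1):
    """Index (0 or 1) of the nearer centroid, by squared Euclidean distance."""
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    return 0 if d0 <= d1 else 1

def kmeans2(points, centroids, iters=50):
    """Plain Lloyd's algorithm for k=2 with fixed initial centroids."""
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [assign(p, centroids[0], centroids[1]) for p in points]
        for k in (0, 1):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centroids[k] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return labels

# Two concentric rings of radius 1 (inner) and 3 (outer), 40 points each
angles = [2 * math.pi * i / 40 for i in range(40)]
points = ([(math.cos(a), math.sin(a)) for a in angles]
          + [(3 * math.cos(a), 3 * math.sin(a)) for a in angles])
labels = kmeans2(points, centroids=[(-3.0, 0.0), (3.0, 0.0)])

# Both clusters mix inner (first 40) and outer (last 40) points:
inner, outer = set(labels[:40]), set(labels[40:])
```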
confidentiality of all of the plaintext blocks corresponding to that counter block may be compromised. In particular, if any plaintext block that is encrypted using a given counter block is known, then the output of the forward cipher function can be determined easily from the associated ciphertext block. This output allows any other plaintext blocks that are encrypted using the same counter block to be easily recovered from their associated ciphertext blocks. There are two aspects to satisfying the uniqueness requirement. First, an incrementing function for generating the counter blocks from any initial counter block can ensure that counter blocks do not repeat within a given message. Second, the initial counter blocks, T1, must be chosen to ensure that counters are unique across all messages that are encrypted under the given key. B.1 The Standard Incrementing Function In general, given the initial counter block for a message, the successive counter blocks are derived by applying an incrementing function. As in the above specifications of the modes, n is the number of blocks in the given plaintext message, and b is the number of bits in the block. The standard incrementing function can apply either to an entire block or to a part of a block. Let m be the number of bits in the specific part of the block to be incremented; thus, m is a positive integer such that m ≤ b. Any string of m bits can be regarded as the binary representation of a non-negative integer x that is strictly less than 2^m. The standard incrementing function takes [x]_m and returns [x + 1 mod 2^m]_m. For example, let the standard incrementing function apply to the five least significant bits of eight-bit blocks, so that b=8 and m=5 (unrealistically small | {"source": 5679, "title": "from dpo"}
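The standard incrementing function is straightforward to state in code. The sketch below follows the description in this passage (increment only the m least significant bits of a b-bit counter block, modulo 2^m), using Python integers to stand in for bit strings:

```python
def increment(counter_block, m, b):
    """Standard incrementing function: treat the m least significant bits
    of a b-bit block as an integer x and replace them with
    (x + 1) mod 2**m; the upper b - m bits are left untouched."""
    mask = (1 << m) - 1
    high = counter_block & ~mask        # upper b - m bits, unchanged
    low = (counter_block + 1) & mask    # (x + 1) mod 2**m
    return (high | low) & ((1 << b) - 1)

# The passage's toy parameters: b = 8, m = 5
blk = 0b101_11111    # upper three bits 101, lower five bits all ones
nxt = increment(blk, m=5, b=8)   # lower five bits wrap to 00000
```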
called the field polynomial. The bit string (a_{m-1} … a_2 a_1 a_0) is taken to represent the polynomial a_{m-1}t^{m-1} + … + a_2t^2 + a_1t + a_0 over GF(2). The field arithmetic is implemented as polynomial arithmetic modulo p(t), where p(t) is the field polynomial. • A normal basis is specified by an element θ of a particular kind. The bit string (a_0 a_1 a_2 … a_{m-1}) is taken to represent the element a_0θ + a_1θ^2 + a_2θ^{2^2} + … + a_{m-1}θ^{2^{m-1}}. Normal basis field arithmetic is not easy to describe or efficient to implement in general, except for a special class called Type T low-complexity normal bases. For a given field degree m, the choice of T specifies the basis and the field arithmetic (see Appendix D.3). There are many polynomial bases and normal bases from which to choose. The following procedures are commonly used to select a basis representation. • Polynomial Basis: If an irreducible trinomial t^m + t^k + 1 exists over GF(2), then the field polynomial p(t) is chosen to be the irreducible trinomial with the lowest-degree middle term t^k. If no irreducible trinomial exists, then a pentanomial t^m + t^a + t^b + t^c + 1 is selected. The particular pentanomial chosen has the following properties: the second term t^a has the lowest degree among all irreducible pentanomials of degree m; the third term t^b has the lowest degree among all irreducible pentanomials of degree m and second term t^a; and the fourth term t^c has the lowest degree among all irreducible pentanomials of degree m, second term t^a, and third | {"source": 6720, "title": "from dpo"}
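Polynomial-basis arithmetic modulo the field polynomial, as described above, fits in a few lines. The sketch below multiplies two field elements represented as integers (bit i = coefficient of t^i), reducing by p(t) whenever the degree reaches m. The tiny field GF(2^4) with p(t) = t^4 + t + 1 is used only as a worked example; it is not one of the standard's recommended fields:

```python
def gf2m_mul(a, b, p, m):
    """Multiply field elements a and b in GF(2^m), polynomial basis,
    where p is an integer whose bits are the coefficients of the field
    polynomial p(t). Shift-and-add with interleaved reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current multiple of a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= p          # reduce: subtract p(t) when deg(a) reaches m
    return r

# GF(2^4) with p(t) = t^4 + t + 1 (binary 10011)
P, M = 0b10011, 4
product = gf2m_mul(0b0010, 0b1000, P, M)   # t * t^3 = t^4 = t + 1
```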
the support ship. The launcher also has a high-definition television (HDTV) camera with pan and tilt functions. Initial sea trials of ABISMO were conducted in 2007. The craft successfully reached a planned depth of 9,760 meters, the deepest part of the Izu–Ogasawara Trench, where it collected core samples of sediment from the seabed. Plans are underway for a mission to the Challenger Deep. In June 2008, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) deployed the 4,517-ton Deep Sea Research Vessel Kairei to the area of Guam for cruise KR08-05 Leg 1 and Leg 2. On 1–3 June 2008, during Leg 1, the Japanese robotic deep-sea probe ABISMO (Automatic Bottom Inspection and Sampling Mobile) on dives 11–13 almost reached the bottom about 150 km (93 mi) east of the Challenger Deep: "Unfortunately, we were unable to dive to the sea floor because the legacy primary cable of the Kaiko system was a little bit short. The 2-m long gravity core sampler was dropped in free fall, and sediment samples of 1.6 m length were obtained. Twelve bottles of water samples were also obtained at various depths..." ABISMO's dive #14 was into the TOTO caldera (12°42.7777 N, 143°32.4055 E), about 60 nmi northeast of the deepest waters of the central basin of the Challenger Deep, where it obtained videos of the hydrothermal plume. Upon successful testing to 10,000 m (32,808 ft), JAMSTEC's ROV ABISMO became, briefly, the only full-ocean-depth rated ROV in existence. On 31 May 2009, ABISMO was joined by the Woods Hole Oceanographic Institution's HROV Nereus as the only two operational full-ocean-depth-capable remotely operated vehicles in existence. During ROV ABISMO's deepest sea trials dive, its manometer measured a depth of 10,257 m (33,652 ft) ±3 m (10 ft) in “Area 1” (vicinity of 12°43’ N, 143°33’ | {"page_id": 27873670, "title": "ABISMO"}
a parameter v as the relative velocity between two inertial reference frames. Using the above conditions, the Lorentz transformation in 3+1 dimensions assumes the form:

−c²t² + x² + y² + z² = −c²t′² + x′² + y′² + z′²

t′ = γ(t − vx/c²),  x′ = γ(x − vt),  y′ = y,  z′ = z

with inverse

t = γ(t′ + vx′/c²),  x = γ(x′ + vt′),  y = y′,  z = z′

or, in light-cone combinations,

ct′ + x′ = (ct + x)·√((c − v)/(c + v)),  ct′ − x′ = (ct − x)·√((c + v)/(c − v))

In physics, analogous transformations have been introduced by Voigt (1887) related to an incompressible medium, and by Heaviside (1888), Thomson (1889), Searle (1896) and Lorentz (1892, 1895) who analyzed Maxwell's equations. They were completed by Larmor (1897, 1900) and Lorentz (1899, 1904), and brought into their modern form by Poincaré (1905) who gave the transformation the name of Lorentz. Eventually, Einstein (1905) showed in his development of special relativity that the transformations follow from the principle of relativity and constant light speed alone by modifying the traditional concepts of space and time, without requiring a mechanical aether in contradistinction to Lorentz and Poincaré. Minkowski (1907–1908) used them to argue that space and time are inseparably connected as spacetime. Regarding | {"page_id": 7058047, "title": "History of Lorentz transformations"}
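The invariance of −c²t² + x² + y² + z² under a boost can be checked numerically. A small sketch, with c set to 1 and an arbitrary test event (both choices mine, purely for illustration):

```python
import math

def boost(t, x, v, c=1.0):
    """Lorentz boost along x with relative velocity v; y and z unchanged."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor gamma
    return g * (t - v * x / c**2), g * (x - v * t)

# Arbitrary event and boost velocity (illustrative values)
t, x, y, z, v, c = 2.0, 1.5, 0.3, -0.7, 0.6, 1.0
tp, xp = boost(t, x, v)

interval  = -c**2 * t**2  + x**2  + y**2 + z**2
intervalp = -c**2 * tp**2 + xp**2 + y**2 + z**2
# The two intervals agree to floating-point precision, and the light-cone
# combinations ct±x scale by the factors sqrt((c∓v)/(c±v)).
```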
the ground up to be secure. Such systems are secure by design. Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system; important for cryptographic protocols for example. === Capabilities and access control lists === Within computer systems, two of the main security models capable of enforcing privilege separation are access control lists (ACLs) and role-based access control (RBAC). An access-control list (ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Role-based access control is an approach to restricting system access to authorized users, used by the majority of enterprises with more than 500 employees, and can implement mandatory access control (MAC) or discretionary access control (DAC). A further approach, capability-based security has been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is the E language. === User security training === The end-user is widely recognized as the weakest link in the security chain and it is estimated that more than 90% of security incidents and breaches involve some kind of human error. Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have | {"page_id": 7398, "title": "Computer security"} |
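The ACL description above maps directly onto a small data structure. This sketch is generic (object and subject names invented), not any particular operating system's implementation:

```python
# An ACL maps each object to its list of (subject, allowed-operations) entries
acl = {
    "payroll.db": [("alice", {"read", "write"}), ("auditors", {"read"})],
    "motd.txt":   [("*", {"read"})],   # '*' stands for any subject
}

def is_allowed(subject, operation, obj):
    """Grant access only if some ACL entry for the object names the
    subject (or the wildcard) and includes the requested operation."""
    for who, ops in acl.get(obj, []):
        if who in (subject, "*") and operation in ops:
            return True
    return False   # default deny: no matching entry, no access
```

The default-deny return value is the key design choice: an object with no ACL, or a subject with no entry, gets nothing.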
proto-genes) can be purged of “self-evidently deleterious” variants, such as those prone to lead to protein aggregation, and thus enriched in potential adaptations relative to a completely non-expressed and unpurged set of sequences. This revealing and purging of cryptic deleterious non-genic sequences is a byproduct of pervasive transcription and translation of intergenic sequences, and is expected to facilitate the birth of functional de novo protein-coding genes. This is because by eliminating the most deleterious variants, what is left is, by a process of elimination, more likely to be adaptive than expected from random sequences. Using the evolutionary definition of function (i.e. that a gene is by definition under purifying selection against loss), the preadaptation model assumes that “gene birth is a sudden transition to functionality” that occurs as soon as an ORF acquires a net beneficial effect. In order to avoid being deleterious, newborn genes are expected to display exaggerated versions of genic features associated with the avoidance of harm. This is in contrast to the proto-gene model, which expects newborn genes to have features intermediate between old genes and non-genes. The mathematics of the preadaptation model assume that the distribution of fitness effects is bimodal, with new sequences of mutations tending to break something or tinker, but rarely in between. Following this logic, populations may either evolve local solutions, in which selection operates on each individual locus and a relatively high error rate is maintained, or a global solution with a low error rate which permits the accumulation of deleterious cryptic sequences. De novo gene birth is thought to be favored in populations that evolve local solutions, as the relatively high error rate will result in a pool of cryptic variation that is “preadapted” through the purging of deleterious sequences. 
Local solutions are more likely in populations with a | {"page_id": 60852153, "title": "De novo gene birth"} |
reconstruction == References == Anderson, D., Yedidia, J., Frankel, J., Marks, J., Agarwala, A., Beardsley, P., Hodgins, J., Leigh, D., Ryall, K., & Sullivan, E. (2000). Tangible interaction + graphical interpretation: a new approach to 3D modeling. SIGGRAPH. p. 393-402. Angelidis, A., Cani, M.-P., Wyvill, G., & King, S. (2004). Swirling-Sweepers: Constant-volume modeling. Pacific Graphics. p. 10-15. Grossman, T., Wigdor, D., & Balakrishnan, R. (2004). Multi finger gestural interaction with 3D volumetric displays. UIST. p. 61-70. Freeman, W. & Weissman, C. (1995). Television control by hand gestures. International Workshop on Automatic Face and Gesture Recognition. p. 179-183. Ringel, M., Berg, H., Jin, Y., & Winograd, T. (2001). Barehands: implement-free interaction with a wallmounted display. CHI Extended Abstracts. p. 367-368. Cao, X. & Balakrishnan, R. (2003). VisionWand: interaction techniques for large displays using a passive wand tracked in 3D. UIST. p. 173-182. A. Cassinelli, S. Perrin and M. Ishikawa, Smart Laser-Scanner for 3D Human-Machine Interface, ACM SIGCHI 2005 (CHI '05) International Conference on Human Factors in Computing Systems, Portland, OR, USA April 2–07, 2005, pp. 1138 – 1139 (2005). == External links == http://www.synertial.com/ http://www.vicon.com/ https://stretchsense.com http://www.dgp.toronto.edu/~ravin/videos/graphite2006_proxy.mov https://web.archive.org/web/20091211043000/http://actuality-medical.com/Home.html http://www.dgp.toronto.edu/ http://www.k2.t.u-tokyo.ac.jp/perception/SmartLaserTracking/ Finger tracking using markers or without markers 3D Hand Tracking | {"page_id": 25493306, "title": "Finger tracking"} |
Emeritus professor of neurology. == Career == Mills worked as a general practitioner for several years before turning his focus to nervous and mental diseases. The only other neurologist at the time was Silas Weir Mitchell, whose work on Civil War injuries created the framework for the specialty of neurology. In 1877, Mills founded the first neurology department in a general hospital in the United States at the Philadelphia General Hospital. In 1883, he became professor of mind and nervous disorders at the University of Pennsylvania. Mills was invited to participate in the high profile autopsy of Charles J. Guiteau, who was hanged in 1882 for the 1881 assassination of President James A. Garfield. Mills used the findings from the autopsy to support his theories about the neurology of the criminally insane. He lectured on medical topics at the Wagner Free Institute of Science, the Franklin Institute, Jefferson Medical College and the Woman's Medical College of Pennsylvania. He founded the Philadelphia Polyclinic and taught there as professor of diseases of the mind and nervous system from 1883 to 1898. In 1914, he helped to organize the Philadelphia Post-graduate School of Neurology, and became a clinical professor in 1919. In 1917, he trained the U.S. Medical Reserve Corps on the treatment of nervous disorders arising from World War I battles. His research focused on cerebral localization and described how certain sections of the brain were dedicated to motor and sensory activities. He was interested in language disorders and his work led to further understanding of aphasias. He studied the use of electrotherapeutics for the treatment of various conditions. He studied the effects of tumors in the cerebral cortex, the cerebellum and the spinal cord. He described the first case of blockage of the superior cerebellar artery. He helped pioneer neurosurgery and studied diseases | {"page_id": 39947778, "title": "Charles Karsner Mills"}
Polar BEAR, short for Polar Beacon Experiment and Auroral Research, was a 1986 U.S. military space mission. Also known as STP P87-1 or STP P87-A, the craft was built for the Air Force by Johns Hopkins University's Applied Physics Laboratory (APL). To save money, the spacecraft was built on the Transit-O satellite retrieved from the National Air and Space Museum, where it had been on display for almost a decade. It was launched on November 13, 1986, from Vandenberg AFB. Its science mission was to investigate communications interference caused by solar flares and auroral activity, continuing the work of the previous HILAT ("High Latitude") mission. == References == == External links == Peterson, Max R.; Grant, David G. (1987). "The Polar BEAR Spacecraft" (PDF). jhuapl.edu. Polar BEAR: 1980's Mission Deployed Museum Satellite Into National Service on YouTube - from Johns Hopkins Applied Physics Laboratory | {"page_id": 74592933, "title": "Polar BEAR"} |
radar operator to manually turn the radar to the approximate angle of the target, in an era when radar systems had to be "locked on" by hand. The system was considered to be of limited utility, and with the introduction of more automated radars they disappeared from fighter designs for some time. == Performance == Detection range varies with external factors such as clouds, altitude, air temperature, the target's attitude and the target's speed. The higher the altitude, the less dense the atmosphere and the less infrared radiation it absorbs - especially at longer wavelengths. The effect of reduction in friction between air and aircraft does not compensate for the better transmission of infrared radiation. Therefore, infrared detection ranges are longer at high altitudes. At high altitudes, temperatures range from −30 to −50 °C, which provides better contrast between aircraft temperature and background temperature. The Eurofighter Typhoon's PIRATE IRST can detect subsonic fighters from 50 km from the front and 90 km from the rear - the larger value being the consequence of directly observing the engine exhaust, with an even greater increase being possible if the target uses afterburners. The range at which a target can be identified with sufficient confidence to decide on weapon release is significantly inferior to the detection range - manufacturers have claimed it is about 65% of the detection range. == Tactics == With infrared homing or fire-and-forget missiles, the fighter may be able to fire upon the target without having to turn on its radar sets at all. Otherwise, the fighter can turn the radar on and achieve a lock immediately before firing if desired. The fighter could also close to within cannon range and engage that way. Whether or not they use their radar, the IRST system can still allow them to launch a | {"page_id": 1493194, "title": "Infrared search and track"}
Kong passed The Hong Kong Institution of Engineers Ordinance which granted the institution statutory status. === Milestones === 1947 - Founding of Engineering Society of Hong Kong 1972 - Amalgamated with the independent Hong Kong Joint Group of the Institutions of Civil, Mechanical and Electrical Engineers of London 1975 - Incorporated by Law as The Hong Kong Institution of Engineers 1982 - Recognition of the HKIE Corporate Members by the Government for Civil Service Appointments 1995 - Admitted to the Washington Accord 1997 - 50th Anniversary of the Founding of Engineering Society of Hong Kong 2001 - Admitted to the Sydney Accord and the Engineers Mobility Forum (EMF) 2003 - Admitted to the Engineering Technologists Mobility Forum (ETMF) 2005 - 30th Anniversary of the HKIE Incorporation 2007 - 60th Anniversary of the Founding of Engineering Society of Hong Kong 2009 - Admitted to the Seoul Accord == Education and Training == The Hong Kong Institution of Engineers has a system to assess the standards of engineering programmes in various tertiary institutions in Hong Kong to determine whether to recognise the qualifications of a programme for graduates to join the Institution. The Institution's accreditation board conducts professional assessments of Bachelor's, Associate's and Higher Diploma programmes in Engineering and Computer Science in Hong Kong and overseas universities. A Bachelor of Engineering programme that meets the standards can be accredited under the "Washington Accord", and a Bachelor of Computer Science programme can be accredited under the "Seoul Accord"; each sub-degree programme that meets the academic requirements of the Institution can be accredited under the "Sydney Accord". The Institution also provides Continuing Professional Development activities to help engineers enhance their professional competence. 
== Publication == The Hong Kong Institution of Engineers, as an academic body in the engineering field in Hong Kong, | {"page_id": 27221555, "title": "Hong Kong Institution of Engineers"} |
MINNIDIP x RAZR CH(AIR) was announced by Motorola Mobility. == Brand licensing == The company has licensed its brand through the years to several companies, and a variety of home products and mobile phone accessories have been released. Motorola Mobility created a dedicated "Motorola Home" website for these products, which sells corded and cordless phones, cable modems and routers, baby monitors, home monitoring systems and pet safety systems. In 2015, Motorola Mobility sold its brand rights for accessories to Binatone, which already was the official licensee for certain home products. This deal includes brand rights for all mobile and car accessories under the Motorola brand. In 2016, Zoom Telephonics was granted the worldwide brand rights for home networking products, including cable modems, routers, Wi-Fi range extenders and related networking products. == Sponsorship == Since 2021, Motorola has been the main sponsor of the Milwaukee Bucks of the NBA. From 2022, Motorola was the main kit sponsor of Italian football club AC Monza. In September 2024, Motorola became the global smartphone partner of Formula 1 from 2025 onwards, and in the following month Motorola also inked the same deal with FIFA. Both deals were made as part of Lenovo's partnership with Formula 1 and FIFA. == See also == iDEN WiDEN List of electronics brands List of Motorola products List of Illinois companies Motorola Moto Motorola Solutions Telephones portal == References == == External links == Official website | {"page_id": 3144730, "title": "Motorola Mobility"}
Probe Side-channel Attack DAC ’19, June 2–6, 2019, Las Vegas, NV, USA. Figure 7: Victim’s prefetching. Figure 8: CSV score with prefetch prediction: (a) next-line prefetch prediction; (b) full prefetch prediction. is trying to infer the victim’s access. To measure the effectiveness of the attacker, CSV computes the correlation between the attacker’s observation (Figure 7(c), Figure 7(d)) and the victim’s access (Figure 7(a)). With the presence of the prefetcher, however, the attacker does not directly monitor the victim’s memory access. Instead, the attacker can only monitor a combination of the victim’s memory accesses and the victim’s prefetched memory accesses (Figure 7(b)). Therefore, computing the correlation between (a),(c) and (a),(d) does not accurately reflect the success in recovering the cache state, as some of the attacker’s observations can only correlate to the victim’s prefetching behavior. To address this issue, for the victim access pattern we include the prefetching behavior based on a model of the prefetcher. A simple model is to assume that the prefetcher will only prefetch the next cache line for the victim. A more sophisticated model can use the prefetcher profile (as we carried out in the previous section) to more accurately predict the victim’s prefetching behavior. Note that we only modify the computation of the CSV metric, not the operation of PAPP or traditional prime and probe experiments. ## 4.2 Comparison to Traditional Prime and Probe Figure 8 shows the CSV score with the traditional attack and PAPP. We found that for the type (1) victim, the prefetcher indeed only prefetches the next cache line except at the beginning and end of a page. PAPP substantially outperforms traditional prime and probe across all cases: for example, for the type (1) workload, PAPP achieves a CSV of 0.81 using the modified next-line CSV metric (Figure 8(a)) and even higher with the full prefetch prediction (Figure 8(b)), while traditional | {"source": 2289, "title": "from dpo"}
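The paper's CSV metric itself is not reproduced in this excerpt; as an illustrative stand-in (an assumption, not the authors' code), a plain Pearson correlation between the attacker's observation and a victim access vector augmented with a modeled next-line prefetch conveys the idea:

```python
# Sketch: correlate attacker observations with a victim access pattern
# augmented by a modeled next-line prefetch. Plain Pearson correlation
# is an illustrative stand-in for the paper's CSV metric.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def with_next_line_prefetch(accessed, n_lines):
    """Model: every demand access also pulls in the next cache line."""
    lines = set(accessed) | {a + 1 for a in accessed if a + 1 < n_lines}
    return [1 if i in lines else 0 for i in range(n_lines)]

victim = [2, 5, 6]                          # demand-accessed cache lines (made up)
model = with_next_line_prefetch(victim, 10)
observed = [0, 0, 1, 1, 0, 1, 1, 1, 0, 0]   # hypothetical probe result
print(round(pearson(model, observed), 3))   # 1.0: the toy probe matches the model exactly
```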
Semigroupal is referred to as “monoidal” in the paper.↩ See Rob Norris’ infographic for the complete picture.↩ Technically this is a warning not an error. It has been promoted to an error in our case because we’re using the -Xfatal-warnings flag on scalac.↩ In Hadoop there is also a shuffle phase that we will ignore here.↩ A closely related library called Spire already provides those abstractions.↩ Copyright 2014-20 Noel Welsh and Dave Gurnell. | {"source": 4183, "title": "from dpo"}
SO user. 11 Attribute cannot be changed once set to CK_TRUE. It becomes a read only attribute. 12 Attribute cannot be changed once set to CK_FALSE. It becomes a read only attribute. Table 11, Common Object Attributes Attribute Data Type Meaning CKA_CLASS1 CK_OBJECT_CLASS Object class (type) Refer to Table 10 for footnotes The above table defines the attributes common to all objects. 4.3 Hardware Feature Objects 4.3.1 Definitions This section defines the object class CKO_HW_FEATURE for type CK_OBJECT_CLASS as used in the CKA_CLASS attribute of objects. 4.3.2 Overview Hardware feature objects (CKO_HW_FEATURE) represent features of the device. They provide an easily expandable method for introducing new value-based features to the Cryptoki interface. When searching for objects using C_FindObjectsInit and C_FindObjects, hardware feature objects are not returned unless the CKA_CLASS attribute in the template has the value CKO_HW_FEATURE. This protects applications written to previous versions of Cryptoki from finding objects that they do not understand. Table 12, Hardware Feature Common Attributes Attribute Data Type Meaning CKA_HW_FEATURE_TYPE1 CK_HW_FEATURE_TYPE Hardware feature (type) - Refer to Table 10 for footnotes 4.3.3 Clock 4.3.3.1 Definition The CKA_HW_FEATURE_TYPE attribute takes the value CKH_CLOCK of type CK_HW_FEATURE_TYPE. 4.3.3.2 Description Clock objects represent real-time clocks that exist on the device. This represents the same clock source as the utcTime field in the CK_TOKEN_INFO structure. Table 13, Clock Object Attributes Attribute Data Type Meaning CKA_VALUE CK_CHAR Current time as a character-string of length 16, represented in the format YYYYMMDDhhmmssxx (4 characters for the year; 2 characters each for the month, the day, the hour, the minute, and the second; and 2 additional reserved ‘0’ characters). The CKA_VALUE attribute may be set using the C_SetAttributeValue function if permitted by the device. 
The session used to set the time MUST be logged in. The device may require the SO to be | {"source": 6021, "title": "from dpo"} |
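The 16-character CKA_VALUE format described above is straightforward to produce with ordinary date formatting; a small Python sketch (illustrative only, not tied to any particular Cryptoki implementation):

```python
import time

def cryptoki_utc_time(t=None):
    """Format a Unix timestamp as the 16-character CKA_VALUE string:
    YYYYMMDDhhmmss followed by the two reserved '0' characters."""
    return time.strftime("%Y%m%d%H%M%S", time.gmtime(t)) + "00"

s = cryptoki_utc_time(0)  # the Unix epoch
print(s)        # 1970010100000000
print(len(s))   # 16
```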
types of engineering process technologies. In addition to their use in construction, geopolymers are utilized in resins, coatings, and adhesives for aerospace, automotive, and protective applications. == Composition == In the 1950s, Viktor Glukhovsky developed concrete materials originally known as "soil silicate concretes" and "soil cements", but since the introduction of the geopolymer concept by Joseph Davidovits, the terminology and definitions of the word geopolymer have become more diverse and often conflicting. The word geopolymer is sometimes used to refer to naturally occurring organic macromolecules; that sense of the word differs from the now-more-common use of this terminology to discuss inorganic materials which can have either cement-like or ceramic-like character. A geopolymer is essentially a mineral chemical compound or mixture of compounds consisting of repeating units, for example silico-oxide (-Si-O-Si-O-), silico-aluminate (-Si-O-Al-O-), ferro-silico-aluminate (-Fe-O-Si-O-Al-O-) or alumino-phosphate (-Al-O-P-O-), created through a process of geopolymerization. This method of describing mineral synthesis (geosynthesis) was first presented by Davidovits at an IUPAC symposium in 1976. Even within the context of inorganic materials, there exist various definitions of the word geopolymer, which can include a relatively wide variety of low-temperature synthesized solid materials. The most typical geopolymer is generally described as resulting from the reaction between metakaolin (calcined kaolinitic clay) and a solution of sodium or potassium silicate (waterglass). Geopolymerization tends to result in a highly connected, disordered network of negatively charged tetrahedral oxide units balanced by the sodium or potassium ions. In the simplest form, an example chemical formula for a geopolymer can be written as Na2O·Al2O3·nSiO2·wH2O, where n is usually between 2 and 4, and w is around 11-15. 
Geopolymers can be formulated with a wide variety of substituents in both the framework (silicon, aluminium) and non-framework (sodium) sites; most commonly potassium or calcium takes on the non-framework sites, but iron or phosphorus | {"page_id": 11932146, "title": "Geopolymer"} |
any of the above conformations). 'Coil' is often codified as ' ' (space), C (coil) or '–' (dash). The helices (G, H and I) and sheet conformations are all required to have a reasonable length. This means that 2 adjacent residues in the primary structure must form the same hydrogen bonding pattern. If the helix or sheet hydrogen bonding pattern is too short they are designated as T or B, respectively. Other protein secondary structure assignment categories exist (sharp turns, Omega loops, etc.), but they are less frequently used. Secondary structure is defined by hydrogen bonding, so the exact definition of a hydrogen bond is critical. The standard hydrogen-bond definition for secondary structure is that of DSSP, which is a purely electrostatic model. It assigns charges of ±q1 ≈ 0.42e to the carbonyl carbon and oxygen, respectively, and charges of ±q2 ≈ 0.20e to the amide hydrogen and nitrogen, respectively. The electrostatic energy is {\displaystyle E=q_{1}q_{2}\left({\frac {1}{r_{\mathrm {ON} }}}+{\frac {1}{r_{\mathrm {CH} }}}-{\frac {1}{r_{\mathrm {OH} }}}-{\frac {1}{r_{\mathrm {CN} }}}\right)\cdot 332{\text{ kcal/mol}}.} According to DSSP, a hydrogen-bond exists if and only if E is less than −0.5 kcal/mol (−2.1 kJ/mol). Although the DSSP formula is a relatively crude approximation of the physical hydrogen-bond energy, it is generally accepted as a tool for defining secondary structure. === SST classification === SST is a Bayesian method to assign secondary structure to protein coordinate data using the Shannon information criterion of Minimum Message Length (MML) inference. SST treats any assignment of secondary structure as a potential hypothesis that attempts to explain (compress) given protein coordinate data. The core idea is that the best secondary | {"page_id": 28691, "title": "Protein secondary structure"}
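The DSSP energy expression lends itself to direct implementation. A minimal Python sketch (the distances below are invented but typical backbone-geometry values in ångströms; this is not the DSSP program itself):

```python
def dssp_hbond_energy(r_on, r_ch, r_oh, r_cn):
    """DSSP electrostatic hydrogen-bond energy in kcal/mol.

    q1 = 0.42e (carbonyl C/O pair), q2 = 0.20e (amide H/N pair);
    distances in angstroms. DSSP assigns a hydrogen bond when E < -0.5.
    """
    q1, q2 = 0.42, 0.20
    return q1 * q2 * (1/r_on + 1/r_ch - 1/r_oh - 1/r_cn) * 332.0

# Illustrative (made-up but plausible) geometry for a backbone H-bond:
e = dssp_hbond_energy(r_on=2.9, r_ch=3.5, r_oh=1.9, r_cn=3.9)
print(round(e, 2), e < -0.5)  # energy, and whether DSSP would call it a bond
```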
ligand is based on salen and Lewis acid DIBAL is added: == Scope and limitations == === Achiral alkenes === The Simmons–Smith reaction can be used to cyclopropanate simple alkenes without complications. Unfunctionalized achiral alkenes are best cyclopropanated with the Furukawa modification (see below), using Et2Zn and CH2I2 in 1,2-dichloroethane. Cyclopropanation of alkenes activated by electron donating groups proceeds rapidly and easily. For example, enol ethers like trimethylsilyloxy-substituted olefins are often used because of the high yields obtained. Despite the electron-withdrawing nature of halides, many vinyl halides are also easily cyclopropanated, yielding fluoro-, bromo-, and iodo-substituted cyclopropanes. The cyclopropanation of N-substituted alkenes is made complicated by N-alkylation as a competing pathway. This can be circumvented by adding a protecting group to nitrogen; however, the addition of electron-withdrawing groups decreases the nucleophilicity of the alkene, lowering yield. The use of highly electrophilic reagents such as CHFI2, in place of CH2I2, has been shown to increase yield in these cases. === Polyenes === Without the presence of a directing group on the olefin, very little chemoselectivity is observed. However, an alkene which is significantly more nucleophilic than any others will be highly favored. For example, cyclopropanation occurs highly selectively at enol ethers. === Functional group compatibility === An important aspect of the Simmons–Smith reaction that contributes to its wide usage is its ability to be used in the presence of many functional groups. Among others, the haloalkylzinc-mediated reaction is compatible with alkynes, alcohols, ethers, aldehydes, ketones, carboxylic acids and derivatives, carbonates, sulfones, sulfonates, silanes, and stannanes. However, some side reactions are commonly observed. Most side reactions occur due to the Lewis-acidity of the byproduct, ZnI2. 
In reactions that produce acid-sensitive products, excess Et2Zn can be added to scavenge the ZnI2 that is formed, forming the less acidic EtZnI. The reaction can also | {"page_id": 1693196, "title": "Simmons–Smith reaction"} |
X by its negative inverse and multiplies the inverse with other elements. If X is multidimensional, the operation involves the inverse of the covariance matrix of X and other multiplications. A swept matrix obtained from a partial sweeping on a subset of variables can be equivalently obtained by a sequence of partial sweepings on each individual variable in the subset and the order of the sequence does not matter. Similarly, a fully swept matrix is the result of partial sweepings on all variables. We can make two observations. First, after the partial sweeping on X, the mean vector and covariance matrix of X are respectively {\displaystyle \mu _{1}(\Sigma _{11})^{-1}} and {\displaystyle -(\Sigma _{11})^{-1}}, which are the same as that of a full sweeping of the marginal moment matrix of X. Thus, the elements corresponding to X in the above partial sweeping equation represent the marginal distribution of X in potential form. Second, according to statistics, {\displaystyle \mu _{2}-\mu _{1}(\Sigma _{11})^{-1}\Sigma _{12}} is the conditional mean of Y given X = 0; {\displaystyle \Sigma _{22}-\Sigma _{21}(\Sigma _{11})^{-1}\Sigma _{12}} is the conditional covariance matrix of Y given X = 0; and {\displaystyle (\Sigma _{11})^{-1}\Sigma _{12}} is the slope of the regression model of Y on X. Therefore, the elements corresponding to Y indices and the intersection of X and Y in {\displaystyle M({\vec {X}},Y)} represent the conditional distribution of Y given X = 0. These semantics render the partial sweeping operation a useful method for | {"page_id": 24134105, "title": "Linear belief function"}
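The partial sweeping operation has a compact algorithmic form. Below is a NumPy sketch of the textbook SWEEP operator (a generic version, not the article's own notation) that reproduces the blocks −Σ11⁻¹, Σ11⁻¹Σ12 and Σ22 − Σ21Σ11⁻¹Σ12 described above:

```python
import numpy as np

def sweep(a, k):
    """Sweep a symmetric matrix on index k (textbook SWEEP operator)."""
    a = np.asarray(a, dtype=float).copy()
    d = a[k, k]
    row = a[k, :].copy()
    col = a[:, k].copy()
    a -= np.outer(col, row) / d   # a_ij <- a_ij - a_ik * a_kj / d
    a[k, :] = row / d             # pivot row divided by the pivot
    a[:, k] = col / d             # pivot column divided by the pivot
    a[k, k] = -1.0 / d            # pivot replaced by its negative inverse
    return a

# Partial sweeping on the X block (index 0) of a 2x2 covariance matrix:
sigma = np.array([[4.0, 2.0],
                  [2.0, 3.0]])
s = sweep(sigma, 0)
# s[0,0] == -1/sigma_11, s[0,1] == sigma_12/sigma_11,
# s[1,1] == sigma_22 - sigma_21*sigma_12/sigma_11
print(s)
```

Sweeping each index of a block in turn is equivalent to sweeping the whole block at once, which matches the order-independence property noted in the text.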
The environmental history of Latin America has become the focus of a number of scholars, starting in the later years of the twentieth century. But historians earlier than that recognized that the environment played a major role in the region's history. Environmental history more generally has developed as a specialized, yet broad and diverse field. According to one assessment of the field, scholars have mainly been concerned with "three categories of research: colonialism, capitalism, and conservation" and the analysis focuses on narratives of environmental decline. There are several currents within the field. One examines humans within particular ecosystems; another concerns humans’ cultural relationship with nature; a third examines environmental politics and policy. General topics that scholars examine are forestry and deforestation; rural landscapes, especially agro-export industries and ranching; conservation of the environment through protected zones, such as parks and preserves; water issues including irrigation, drought, flooding and its control through dams, urban water supply, use, and waste water. The field often classifies research geographically, temporally, and thematically. Much of the environmental history of Latin America focuses on the nineteenth and twentieth centuries, but there is a growing body of research on the first three centuries (1500-1800) of European impact. As the field established itself as a more defined academic pursuit, the journal Environmental History was founded in 1996, as a joint venture of the Forest History Society and the American Society for Environmental History (ASEH). The Latin American and Caribbean Society for Environmental History (SOLCHA) formed in 2004. Standard reference works for Latin American history now include a section on environmental history. == Early scholarship == Works by geographers and other scholars began focusing on humans and the environmental context, especially Carl O. Sauer at University of California, Berkeley. 
Other early scholars examining humans and nature interactions, such as William Denevan, Julian | {"page_id": 65056762, "title": "Environmental history of Latin America"} |
A camera lucida is an optical device used as a drawing aid by artists and microscopists. It projects an optical superimposition of the subject being viewed onto the surface upon which the artist is drawing. The artist sees both scene and drawing surface simultaneously, as in a photographic double exposure. This allows the artist to duplicate key points of the scene on the drawing surface, thus aiding in the accurate rendering of perspective. == History == The camera lucida was patented in 1806 by the English chemist William Hyde Wollaston. The basic optics were described 200 years earlier by the German astronomer Johannes Kepler in his Dioptrice (1611), but there is no evidence he constructed a working camera lucida. There is also evidence to suggest that the Elizabethan spy Arthur Gregory's 1596 "perspective box" operated on at least highly similar principles to the later camera lucida, but the secretive nature of his work and fear of rivals copying his methods led to his invention becoming lost. By the 19th century, Kepler's description had similarly fallen into oblivion, so Wollaston's claim to have invented the device was never challenged. The term "camera lucida" (Latin 'well-lit room' as opposed to camera obscura 'dark room') is Wollaston's. While on honeymoon in Italy in 1833, the photographic pioneer William Fox Talbot used a camera lucida as a sketching aid. He later wrote that it was a disappointment with his resulting efforts which encouraged him to seek a means to "cause these natural images to imprint themselves durably". In 2001, artist David Hockney's book Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters was met with controversy. His argument, known as the Hockney-Falco thesis, is that the notable transition in style for greater precision and visual realism that occurred around the decade of the | {"page_id": 339562, "title": "Camera lucida"} |
oil, dirt, or scale that may fill a defect or falsely indicate a flaw. Chemical treatment with solvents or reactive agents can be used to rid the surface of undesired contaminants and ensure good penetration when the penetrant is applied. Sometimes the part is also dried at up to 100 °C in an oven and cooled down to 40 °C. Sandblasting to remove paint from a surface prior to the FPI process may mask (smear material over) cracks, making the penetrant ineffective. Even if the part has already been through a previous FPI operation, it is imperative that it is cleaned again: most penetrants are incompatible with one another, and residue of a previous penetrant will thwart any attempt to identify defects with a different one. This process of cleaning is critical because if the surface of the part is not properly prepared to receive the penetrant, defective product may be moved on for further processing. This can cause lost time and money in reworking, over-processing, or even scrapping a finished part at final inspection. === Step 2: Penetrant application === The fluorescent penetrant is applied to the surface and allowed time to seep into flaws or defects in the material. The process of waiting for the penetrant to seep into flaws is called dwell time. Dwell time varies by material, the size of the indications to be identified, and the applicable requirements/standards, but is generally less than 30 minutes. It requires much less time to penetrate larger flaws because the penetrant is able to soak in much faster. The opposite is true for smaller flaws/defects. === Step 3: Excess penetrant removal === After the identified dwell time has passed, penetrant on the outer surface of the material is then removed. This highly controlled process is necessary in order to ensure | {"page_id": 21087564, "title": "Fluorescent penetrant inspection"}
Schoonschip was one of the first computer algebra systems, developed in 1963 by Martinus J. G. Veltman, for use in particle physics. "Schoonschip" refers to the Dutch expression "schoon schip maken": to make a clean sweep, to clean/clear things up (literally: to make the ship clean). The name was chosen "among others to annoy everybody, who could not speak Dutch". Veltman initially developed the program to compute the quadrupole moment of the W boson, the computation of which involved "a monstrous expression involving in the order of 50,000 terms in intermediate stages". The initial version, dating to December 1963, ran on an IBM 7094 mainframe. In 1966 it was ported to the CDC 6600 mainframe, and later to most of the rest of Control Data's CDC line. In 1983 it was ported to the Motorola 68000 microprocessor, allowing its use on a number of 68000-based systems running variants of Unix. FORM can be regarded, in a sense, as the successor to Schoonschip. Contacts with Veltman about Schoonschip have been important for Stephen Wolfram in building Mathematica. == See also == Comparison of computer algebra systems == References == == External links == Documentation Schoonschip program files, documentation, and examples == Further reading == Close, Frank (2011) The Infinity Puzzle. Oxford University Press. Describes the historical context of and rationale for 'Schoonschip' (Chapter 11: "And Now I Introduce Mr 't Hooft") | {"page_id": 24536975, "title": "Schoonschip"}
the evidence is coming from mice, the above scheme represents the events in mice. The completion of the meiosis is simplified here for clarity. Steps 1–4 can be studied in in vitro fertilized embryos, and in differentiating stem cells; X-reactivation happens in the developing embryo, and subsequent (6–7) steps inside the female body, therefore much harder to study. ===== Timing ===== The timing of each process depends on the species, and in many cases the precise time is actively debated. ===== Inheritance of inactivation status across cell generations ===== The descendants of each cell which inactivated a particular X chromosome will also inactivate that same chromosome. This phenomenon, which can be observed in the coloration of tortoiseshell cats when females are heterozygous for the X-linked pigment gene, should not be confused with mosaicism, which is a term that specifically refers to differences in the genotype of various cell populations in the same individual; X-inactivation, which is an epigenetic change that results in a different phenotype, is not a change at the genotypic level. For an individual cell or lineage the inactivation is therefore skewed or 'non-random', and this can give rise to mild symptoms in female 'carriers' of X-linked genetic disorders. === Selection of one active X chromosome === Typical females possess two X chromosomes, and in any given cell one chromosome will be active (designated as Xa) and one will be inactive (Xi). However, studies of individuals with extra copies of the X chromosome show that in cells with more than two X chromosomes there is still only one Xa, and all the remaining X chromosomes are inactivated. This indicates that the default state | {"page_id": 1502660, "title": "X-inactivation"}
Critical radius is the minimum particle size from which an aggregate is thermodynamically stable. In other words, it is the lowest radius formed by atoms or molecules clustering together (in a gas, liquid or solid matrix) before a new phase inclusion (a bubble, a droplet or a solid particle) is viable and begins to grow. Formation of such stable nuclei is called nucleation. At the beginning of the nucleation process, the system finds itself in an initial phase. Afterwards, the formation of aggregates or clusters from the new phase occurs gradually and randomly at the nanoscale. Subsequently, if the process is feasible, the nucleus is formed. Note that the formation of aggregates is possible only under specific conditions; when these conditions are not satisfied, a rapid creation-annihilation of aggregates takes place and nucleation and the subsequent crystal growth process do not happen. In precipitation models, nucleation is generally a prelude to models of the crystal growth process. Sometimes precipitation is rate-limited by the nucleation process. An example would be when someone takes a cup of superheated water from a microwave and, upon jiggling it with a spoon or against the wall of the cup, heterogeneous nucleation occurs and some of the water rapidly flashes into steam. If the change in phase forms a crystalline solid in a liquid matrix, the atoms might then form a dendrite. The crystal growth continues in three dimensions, the atoms attaching themselves in certain preferred directions, usually along the axes of a crystal, forming a characteristic tree-like structure of a dendrite. == Mathematical derivation == The critical radius of a system can be determined from its Gibbs free energy: ΔG_T = ΔG_V + ΔG_S. It has two components, the volume energy ΔG_V | {"page_id": 2331297, "title": "Critical radius"}
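The excerpt cuts off mid-derivation, but the standard classical-nucleation result it is heading toward can be checked numerically: writing ΔG_v for the (negative) free-energy change per unit volume and γ for the surface energy, ΔG_T(r) = (4/3)πr³ΔG_v + 4πr²γ peaks at r* = −2γ/ΔG_v. A minimal sketch with purely illustrative values (not data for any real material):

```python
import math

def delta_G_total(r: float, dG_v: float, gamma: float) -> float:
    """Free energy of a spherical nucleus of radius r:
    volume term (negative, drives growth) + surface term (positive, opposes it)."""
    return (4 / 3) * math.pi * r**3 * dG_v + 4 * math.pi * r**2 * gamma

def critical_radius(dG_v: float, gamma: float) -> float:
    """r* = -2*gamma/dG_v, the radius where delta_G_total peaks (requires dG_v < 0)."""
    return -2 * gamma / dG_v

# Illustrative numbers only:
dG_v, gamma = -1.0e8, 0.1              # J/m^3 and J/m^2
r_star = critical_radius(dG_v, gamma)  # 2e-9 m: smaller nuclei shrink, larger ones grow
```

Nuclei below r* lower their free energy by dissolving, those above it by growing, which is exactly the "creation–annihilation of aggregates" behavior described above.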
]; and (b) Delete information when no longer needed. Discussion: Retaining information longer than is needed makes the information a potential target for advanced adversaries searching for high value assets to compromise through unauthorized disclosure, unauthorized modification, or exfiltration. For system-related information, unnecessary retention provides advanced adversaries information that can assist in their reconnaissance and lateral movement through the system. Related Controls: None. (3) NON - PERSISTENCE | NON - PERSISTENT CONNECTIVITY Establish connections to the system on demand and terminate connections after [Selection: completion of a request; a period of non-use ]. Discussion: Persistent connections to systems can provide advanced adversaries with paths to move laterally through systems and potentially position themselves closer to high value assets. Limiting the availability of such connections impedes the adversary’s ability to move freely through organizational systems. Related Controls: SC-10 . References: None. SI-15 INFORMATION OUTPUT FILTERING Control: Validate information output from the following software programs and/or applications to ensure that the information is consistent with the expected content: [ Assignment: organization-defined software programs and/or applications ]. Discussion: Certain types of attacks, including SQL injections, produce output results that are unexpected or inconsistent with the output results that would be expected from software programs or applications. Information output filtering focuses on detecting extraneous content, preventing such extraneous content from being displayed, and then alerting monitoring tools that anomalous behavior has been discovered. Related Controls: SI-3 , SI-4 , SI-11 . Control Enhancements: None. References: None. 
SI-16 MEMORY PROTECTION Control: Implement the following controls to protect the system memory from unauthorized code execution: [ Assignment: organization-defined controls ]. Discussion: Some adversaries launch attacks with the intent of executing code in non-executable regions of memory or in memory locations that are prohibited. Controls employed to protect memory include data execution prevention and address space layout | {"source": 985, "title": "from dpo"} |
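The SI-15 output-filtering control above can be illustrated with a small sketch. Everything here is hypothetical: SP 800-53 deliberately leaves the concrete content profile and signature rules to the organization, so the patterns and helper names below are invented for the example.

```python
import re

# Hypothetical SI-15-style information output filtering: output that fails
# the expected content profile, or that matches a known leakage signature,
# is withheld and reported to a monitoring hook instead of being displayed.

EXPECTED_PROFILE = re.compile(r"^[\w ,.'-]{1,200}$")  # what normal output looks like
LEAKAGE_SIGNATURES = [re.compile(p, re.IGNORECASE) for p in (
    r"sql syntax",     # classic SQL-injection error leakage
    r"ORA-\d{5}",      # Oracle error codes
    r"stack trace",
)]

def filter_output(text: str, alert=print) -> str:
    """Pass output through only if it matches the expected content profile
    and contains no known leakage signature; otherwise withhold and alert."""
    ok = bool(EXPECTED_PROFILE.match(text))
    leaky = any(sig.search(text) for sig in LEAKAGE_SIGNATURES)
    if ok and not leaky:
        return text
    alert(f"anomalous output withheld ({len(text)} chars)")
    return "[output withheld]"
```

In a real deployment the `alert` hook would feed the monitoring tools referenced by the related control SI-4 rather than printing.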
whether an MD-sentence is minimized, since there are uncountably many MD-sentences (because there are uncountably many choices for S). Our completeness proof makes use of the following lemmas. Lemma 3.1. Let ; (σ_1, …, σ_k; S) be the premise of Rule 7. Assume that G = {σ_1, …, σ_k} is closed under subformulas (so that in particular, every atomic proposition that appears inside a member of G is a member of G). Then the conclusion ; (σ_1, …, σ_k; S′) of Rule 7 is minimized. Proof: Let φ be the conclusion ; (σ_1, …, σ_k; S′) of Rule 7. Assume that (s_1, …, s_k) ∈ S′. To prove that φ is minimized, we must show that there is a model M of φ such that for 1 ≤ i ≤ k, the value of σ_i in M is s_i. From the assignment of values to the atomic propositions, as specified by a portion of (s_1, …, s_k), we obtain our model M. For this model M, the value of each σ_i is exactly that specified by (s_1, …, s_k), as we can see by a simple induction on the structure of formulas. Hence, φ is minimized. The assumption of closure under subformulas in Lemma 3.1 is needed, as the following example shows. Let γ be the MD-sentence ; (σ_1 & σ_2, σ_1 ∨ σ̄_2; {(0.5, 0.2)}) | {"source": 2713, "title": "from dpo"}
is computed. Related to previous work on abstractive summarization, a contrastive loss function L_sim based on a hinge loss enforces a decrease in semantic similarity with s′ among the predicted interpretations i′_1, …, i′_J: (3) L_sim = (1/J) ∑_{j=2}^{J} ℓ(s′, i′_j, i′_{j−1}), where (4) ℓ(s′, i′_j, i′_{j−1}) = max(0, sim(i′_j, s′) − sim(i′_{j−1}, s′) + m), with margin _m_ forcing a substantial difference between the generated interpretations. SimCSE and cosine similarity are again used for computing sim(·,·). Model parameters are optimized using a weighted combination of the language modeling and similarity-decrease objectives: L = α L_m + (1−α) L_sim. This way, we can explicitly control the trade-off between both losses. **[One2M-Con]** Losses are summed over _N_ training sentences. ### 4.5. Alternative decoding models There are many possibilities for designing a decoder that generates different interpretations during training. We also experimented with models that used context g_j as a prompt to autoregressively decode the interpretation, both in the one-to-one and one-to-many settings, instead of using g_j as input together with _s_ and t_i. In the one-to-many case, this leads to a multi-branch decoder with _J_ parallel decoders, where the j-th decoder autoregressively generates i_j taking g_j as decoder prompt. Alternatively, we designed an additional loss function that during training enforces the similarity of a generated interpretation with g_j. None of these approaches could improve the results, often yielding nonsensical interpretations that disproportionately focused on g_j. 5. Experiments -------------- In this section, we describe the experimental setup and the metrics used for evaluating the performance of the proposed generation frameworks on the IM task. We then continue to thoroughly investigate the success of the frameworks and the importance of IM for content moderation. 
To that end, we formulate and answer several pertinent research | {"source": 4967, "title": "from dpo"} |
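The hinge-based similarity-decrease objective of Eqs. (3)–(4) above can be sketched in plain Python; the similarity values passed in are made-up stand-ins for the SimCSE cosine scores sim(i′_j, s′), and the function names are invented for the example:

```python
def hinge(sim_prev: float, sim_curr: float, margin: float) -> float:
    # Eq. (4): zero loss only when similarity to s' drops by at least
    # `margin` from interpretation j-1 to interpretation j.
    return max(0.0, sim_curr - sim_prev + margin)

def similarity_decrease_loss(sims, margin: float = 0.1) -> float:
    """Eq. (3) analogue: average hinge over consecutive interpretations.

    `sims[j]` stands in for sim(i'_j, s'), e.g. a SimCSE cosine score,
    with the list ordered i'_1 ... i'_J.
    """
    terms = [hinge(sims[j - 1], sims[j], margin) for j in range(1, len(sims))]
    return sum(terms) / len(sims)  # normalized by J, as in the definition
```

A strictly decreasing sequence whose gaps are at least the margin incurs zero loss; any increase in similarity between consecutive interpretations is penalized.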
\(q \ne q_\mathsf {reject}\). All transitions \(\delta _i(q, m, a)\) not described below are \(\bot \). **_Transitions from the Root._** At the root node, labeled by \(\text {``}\mathbf{root} \text {''}\), the automaton transitions as follows: $$\begin{aligned} \begin{aligned} \delta _1^\textsf {cc}(q, m, \mathbf{root} ) = {\left\{ \begin{array}{ll} (q, L) \text { if } m = D\\ \texttt {true} \,\, \text { otherwise } \end{array}\right. } \end{aligned} \end{aligned}$$ A two-way tree automaton starts in the configuration where _m_ is set to \(D\). This means that in the very first step the automaton moves to the child node (direction \(L\)). If the automaton visits the root node in a subsequent step (marking the completion of an execution), then all transitions are enabled. **_Transitions from Leaf Nodes._** For a leaf node with label \(a \in \varGamma _0\) and state _q_, the transition of the automaton is \(\delta _0^\textsf {cc}(q, D, a) = (\delta ^\textsf {exec}(q,a), U)\). That is, when the automaton visits a leaf node from the parent, it simulates reading _a_ in … [image omitted] **_Nodes._** As described earlier, when reading a node labeled by \(\text {``}\mathbf{while} (x\sim y)\text {''}\), where \(\sim \, \in \{=, \ne \}\), the automaton simulates both the possibility of entering the loop body as well as the possibility of exiting the loop. This corresponds to a conjunctive transition: $$\begin{aligned} \delta _1^\textsf {cc}(q, m, \text {``}\mathbf{while} (x\sim y)\text {''})&= (q', L\big ) \wedge \big (q'', U)\\ \textit{where } q'&= \delta ^\textsf {exec}(q, \text {``}\mathbf{assume} (x \sim y)\text {''})\\ \textit{and } q''&= \delta ^\textsf {exec}(q, \text {``}\mathbf{assume} (x \not \sim y)\text {''}) \end{aligned}$$ Above, \(\not \sim \) refers to \(\text {``}=\text {''}\) when \(\sim \) is \(\text {``}\ne \text {''}\), and vice versa. The first conjunct | {"source": 6286, "title": "from dpo"}
A toxoid is an inactivated toxin (usually an exotoxin) whose toxicity has been suppressed either by chemical (formalin) or heat treatment, while other properties, typically immunogenicity, are maintained. Toxins are secreted by bacteria, whereas toxoids are altered form of toxins; toxoids are not secreted by bacteria. Thus, when used during vaccination, an immune response is mounted and immunological memory is formed against the molecular markers of the toxoid without resulting in toxin-induced illness. Such a preparation is also known as an anatoxin. There are toxoids for prevention of diphtheria, tetanus and botulism. Toxoids are used as vaccines because they induce an immune response to the original toxin or increase the response to another antigen since the toxoid markers and toxin markers are preserved. For example, the tetanus toxoid is derived from the tetanospasmin produced by Clostridium tetani. The latter causes tetanus and is vaccinated against by the DTaP vaccine. While patients may sometimes complain of side effects after a vaccine, these are associated with the process of mounting an immune response and clearing the toxoid, not the direct effects of the toxoid. The toxoid does not have virulence as the toxin did before inactivation. Toxoids are also useful in the production of human antitoxins. Multiple doses of tetanus toxoid are used by many plasma centers in the United States for the development of highly immune persons for the production of human anti-tetanus immune globulin (tetanus immune globulin (TIG), HyperTet (c)), which has replaced horse serum-type tetanus antitoxin in most of the developed world. Toxoids are also used in the production of conjugate vaccines. The highly antigenic toxoids help draw attention to weaker antigens such as polysaccharides found in the bacterial capsule. == List of toxoids == == References == | {"page_id": 1860157, "title": "Toxoid"} |
the temperature and vaporizes. The process, referred to as flash cooling, reduces the risk of thermal damage, inactivates thermophilic microbes due to abruptly falling temperatures, removes some or all of the excess water obtained through the contact with steam, and removes some of the volatile compounds which negatively affect product quality. The cooling rate and quantity of water removed is determined by the level of vacuum, which must be carefully calibrated. === Homogenization === Homogenization is part of the process specifically for milk. Homogenization is a mechanical treatment which results in a reduction of the size, and an increase in the number and total surface area, of fat globules in the milk. This reduces milk's tendency to form cream at the surface, and on contact with containers enhances its stability and makes it more palatable for consumers. == Worldwide use == UHT milk has seen large success in much of Europe, where across the continent seven out of ten people consume it regularly. In countries with a warmer climate such as Spain, UHT milk is preferred due to the high cost of refrigerated transportation and "inefficient cool cabinets". UHT is less popular in Northern Europe and Scandinavia, particularly in Denmark, Finland, Norway, Sweden, the United Kingdom and Ireland. It is also less popular in Greece, where fresh pasteurized milk is the most popular, due to legislation and societal attitudes. While most regular milk sold in the United States is pasteurized, a significant share of organic milk sold in the US is UHT treated (organic milk is produced at fewer locations and consequently spends more time in the supply chain and could therefore spoil before or shortly after being sold if pasteurized). == Effects on quality == === Milk === UHT milk contains the same amount of calories and calcium as | {"page_id": 233884, "title": "Ultra-high-temperature processing"} |
been identified yet. Nevertheless, several aspects have already been identified as central in the process of pollen tube growth. The actin filaments in the cytoskeleton, the peculiar cell wall, secretory vesicle dynamics, and the flux of ions, to name a few, are some of the fundamental features readily identified as crucial, but whose role has not yet been completely elucidated. === DNA repair === During pollen tube growth, DNA damages that arise need to be repaired in order for the male genomic information to be transmitted intact to the next generation. In the plant Cyrtanthus mackenii, bicellular mature pollen contains a generative cell and a vegetative cell. Sperm cells are derived by mitosis of the generative cell during pollen tube elongation. The vegetative cell is responsible for pollen tube development. Double-strand breaks in DNA that arise appear to be efficiently repaired in the generative cell, but not in the vegetative cell, during the transport process to the female gametophyte. == RMD Actin filament organization is a contributor to pollen tube growth == === Overview === In order for fertilization to occur, there is rapid tip growth in pollen tubes which delivers the male gametes into the ovules. A pollen tube consists of three different regions: the apex which is the growth region, the subapex which is the transition region, and the shank which acts like normal plant cells with the specific organelles. The apex region is where tip growth occurs and requires the fusion of secretory vesicles. There is mostly pectin and homogalacturonans (part of the cell wall at the pollen tube tip) inside these vesicles. The pectin in the apex region contains methylesters which allow for flexibility, before the enzyme pectin methylesterase removes the methylester groups allowing calcium to bind between pectins and give structural support. The homogalacturonans accumulate | {"page_id": 325963, "title": "Pollen tube"} |
modern times. The game was originally distributed through the shareware distribution model, allowing players to try a limited part of the game for free but requiring payment to play the rest, and represented one of the first uses of texture mapping graphics in a popular game, along with Ultima Underworld. In December 1992, Computer Gaming World reported that DOS accounted for 82% of computer-game sales in 1991, compared to Macintosh's 8% and Amiga's 5%. In response to a reader's challenge to find a DOS game that played better than the Amiga version the magazine cited Wing Commander and Civilization, and added that "The heavy MS-DOS emphasis in CGW merely reflects the realities of the market". A self-reported Computer Gaming World survey in April 1993 similarly found that 91% of readers primarily used IBM PCs and compatibles for gaming, compared to 6% for Amiga, 3% for Macintosh, and 1% for Atari ST, while a Software Publishers Association study found that 74% of personal computers were IBMs or compatible, 10% Macintosh, 7% Apple II, and 8% other. 51% of IBM or compatible had 386 or faster CPUs. By 1992, DOS games such as Links 386 Pro supported Super VGA graphics. While leading Sega and Nintendo console systems kept their CPU speed at 3–7 MHz, the 486 PC processor ran much faster, allowing it to perform many more calculations per second. The 1993 release of Doom on the PC was a breakthrough in 3D graphics, and was soon ported to various game consoles in a general shift toward greater realism. Computer Gaming World reiterated in 1994, "we have to advise readers who want a machine that will play most of the games to purchase high-end MS-DOS machines". By 1993, PC floppy disk games had a sales volume equivalent to about one-quarter that of | {"page_id": 1336512, "title": "PC game"} |
[1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153. Levy, Azriel (2002) [First published 1979]. Basic Set Theory (Reprinted ed.). Dover. ISBN 0-486-42079-5. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. Steen, Lynn Arthur; Seebach, J. Arthur Jr (1978). Counterexamples in Topology. New York: Springer-Verlag. Reprinted by Dover Publications, New York, 1995. ISBN 0-486-68735-X (Dover edition). Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240. == External links == Encyclopaedia of Mathematics article on Baire theorem Tao, T. (1 February 2009). "245B, Notes 9: The Baire category theorem and its Banach space consequences". | {"page_id": 55227, "title": "Baire category theorem"} |
the topic, explaining "why astrology doesn't work". === Beam Me Up === Part V, "Beam Me Up", explores additional topics, such as common misconceptions regarding the Hubble Space Telescope and its funding, star-naming companies, and astronomy myths and inaccuracies perpetuated by Hollywood, providing "The Top-Ten Examples of Bad Astronomy in Major Motion Pictures". == Publications == Plait, Philip C. (1 March 2002). Bad Astronomy: Misconceptions and Misuses Revealed, from Astrology to the Moon Landing "Hoax". New York: Wiley. ISBN 978-0-471-40976-2. OCLC 48885221. Bad Astronomy was the first volume in the planned series Bad Science published by John Wiley & Sons. A second volume, Bad Medicine, by Christopher Wanjek, was published in 2003 and was the most recent in the series. In 2008, Plait published a second book on astronomy, Death from the Skies, which explored the various ways in which the human race could be rendered extinct by astronomical phenomena. == See also == Death from the Skies == References == == External links == Bad Astronomy at Open Library Plait's Bad Astronomy blog at Slate.com Sample chapter from publisher. | {"page_id": 8740164, "title": "Bad Astronomy"} |
The Silurian Tuscarora Formation — also known as Tuscarora Sandstone or Tuscarora Quartzite — is a mapped bedrock unit in Pennsylvania, Maryland, West Virginia, and Virginia, US. == Description == The Tuscarora is a thin- to thick-bedded, fine-grained to coarse-grained orthoquartzite. It is a white to medium-gray or gray-green subgraywacke, sandstone, siltstone and shale, cross-stratified and conglomeratic in parts, containing a few shale interbeds. Details of the type locality and of stratigraphic nomenclature for this unit as used by the U.S. Geological Survey are available on-line at the National Geologic Map Database. The Tuscarora and its lateral equivalents are the primary ridge-formers of the Ridge-and-Valley Appalachians in the eastern United States. It is typically 935 feet thick in Pennsylvania, and in Maryland varies from 60 feet to 400 feet thick from east to west. === Notable exposures === The Tuscarora Formation is commonly exposed on various ridge crests and in many water gaps in the Ridge and Valley physiographic province of the Appalachians of Pennsylvania, Maryland, and West Virginia, particularly along the Wills Mountain Anticline. In Pennsylvania, the Tuscarora is exposed along US 30 on the north and south sides of the Narrows in central Bedford County, where it is nearly vertical. It is also well-exposed in the core of Jack's Mountain in Jack's Narrows, where the Juniata River cuts through the mountain, just west of Mount Union. The Standing Stone Trail traverses this cut, and many of the "Thousand Steps" here are Tuscarora quartzite. In Maryland, the National Road (US 40) passes arched Tuscarora sandstone outcrops in the Cumberland Narrows in Allegany County. In West Virginia, the River Knobs along the North Fork of the South Branch of the Potomac River in Pendleton County include dramatic outcrops of nearly vertical Tuscarora sandstone. Some of the better known of | {"page_id": 10354193, "title": "Tuscarora Sandstone"}
A radar tower is a tower whose function is to support a radar facility, usually a local airport surveillance radar, and hence often at or in the vicinity of an airport or a military air base. The antenna is often continually rotating. In addition, radar towers are used for the installation and operation of search and height finder radars at military radar stations, where the mission is to support air defense missions. These missions were characterized as Aircraft Control & Warning (AC&W), or Long Range Surveillance in support of the Semi-Automatic Ground Environment (SAGE). The tower typically has a continuously rotating parabolic antenna. Often, the antenna is protected from the weather by a radome, and is thus not visible from the outside. For regional air traffic control, en route radar installations are used. The data from these radars is fed into the civilian RADNET system and transferred to all civil and military control centres. Ideally, a radar antenna is sited on a high spot in the terrain, because this reduces the angle of elevation and thus increases the range of the radar device. In the absence of a suitable high spot, radar towers are used to provide the necessary elevation. Radar towers also need to provide weather protection and services (air conditioning and power) for the radar equipment, communications, operators and maintainers. In Germany, the operational command posts of the German Air Force use the Bundeswehr radar towers for the stationary radar sites of the operational command areas. In Britain, radar gave the defenders the edge during the Battle of Britain, allowing incoming air raids to be detected before they arrived. Military radar stations have supported US and allied air defense operations at numerous worldwide locations since World War II. The largest network of military radar stations evolved during the Cold War era to support | {"page_id": 45259221, "title": "Radar tower"}
=== Electroluminescence === Electroluminescence is light emission stimulated by electric current. In organic compounds, electroluminescence has been known since the early 1950s, when Bernanose and coworkers first produced electroluminescence in crystalline thin films of acridine orange and quinacrine. In 1960, researchers at Dow Chemical developed AC-driven electroluminescent cells using doping. In some cases, similar light emission is observed when a voltage is applied to a thin layer of a conductive organic polymer film. While electroluminescence was originally mostly of academic interest, the increased conductivity of modern conductive polymers means enough power can be put through the device at low voltages to generate practical amounts of light. This property has led to the development of flat panel displays using organic LEDs, solar panels, and optical amplifiers. === Barriers to applications === Since most conductive polymers require oxidative doping, the properties of the resulting state are crucial. Such materials are salt-like (polymer salt), which makes them less soluble in organic solvents and water and hence harder to process. Furthermore, the charged organic backbone is often unstable towards atmospheric moisture. Improving processability for many polymers requires the introduction of solubilizing substituents, which can further complicate the synthesis. Experimental and theoretical thermodynamical evidence suggests that conductive polymers may even be completely and principally insoluble so that they can only be processed by dispersion. === Trends === Most recent emphasis is on organic light emitting diodes and organic polymer solar cells. The Organic Electronics Association is an international platform to promote applications of organic semiconductors. Conductive polymer products with embedded and improved electromagnetic interference (EMI) and electrostatic discharge (ESD) protection have led to both prototypes and products. 
For example, Polymer Electronics Research Center at University of Auckland is developing a range of novel DNA sensor technologies based on conducting polymers, photoluminescent polymers and inorganic nanocrystals | {"page_id": 732746, "title": "Conductive polymer"} |
Solar radiation modification (SRM) (or solar geoengineering) is a group of large-scale approaches to reduce global warming by increasing the amount of sunlight that is reflected away from Earth and back to space. It is not intended to replace efforts to reduce greenhouse gas emissions, but rather to complement them as a potential way to limit global warming.: 1489 SRM is a form of geoengineering. The most-researched SRM method is stratospheric aerosol injection (SAI), in which small reflective particles would be introduced into the upper atmosphere to reflect sunlight.: 350 Other approaches include marine cloud brightening (MCB), which would increase the reflectivity of clouds over the oceans, or constructing a space sunshade or a space mirror, to reduce the amount of sunlight reaching earth. Climate models have consistently shown that SRM could reduce global warming and many effects of climate change, including some potential climate tipping points. However, its effects would vary by region and season, and the resulting climate would differ from one that had not experienced warming. Scientific understanding of these regional effects, including potential environmental risks and side effects, remains limited.: 1491–1492 SRM also raises complex political, social, and ethical issues. Some worry that its development could reduce the urgency of cutting emissions. Its relatively low direct costs and technical feasibility suggest that it could, in theory, be deployed unilaterally, prompting concerns about international governance. Currently, no comprehensive global framework exists to regulate SRM research or deployment. Interest in SRM has grown in recent years, driven by continued global warming and slow progress in emissions reductions. This has led to increased scientific research, policy debate, and public discussion, although SRM remains controversial. 
SRM is also known as sunlight reflection methods, solar climate engineering, albedo modification, and solar radiation management. == Context == The interest in solar radiation | {"page_id": 20694764, "title": "Solar radiation modification"} |
high dimensions, Annals of Statistics 40(2), 1171–1197. Alizadeh, A., Eisen, M., Davis, R. E., Ma, C., Lossos, I., Rosenwald, A., Boldrick, J., Sabet, H., Tran, T., Yu, X., Powell, J., Marti, G., Moore, T., Hudson, J., Lu, L., Lewis, D., Tibshirani, R., Sherlock, G., Chan, W., Greiner, T., Weisenburger, D., Armitage, K., Levy, R., Wilson, W., Greve, M., Byrd, J., Botstein, D., Brown, P. and Staudt, L. (2000), Identification of molecularly and clinically distinct subtypes of diffuse large B-cell lymphoma by gene expression profiling, Nature 403, 503–511. Alliney, S. and Ruzinsky, S. (1994), An algorithm for the minimization of mixed L1 and L2 norms with application to Bayesian estimation, Transactions on Signal Processing 42(3), 618–627. Amini, A. A. and Wainwright, M. J. (2009), High-dimensional analysis of semidefinite relaxations for sparse principal component analysis, Annals of Statistics 5B, 2877–2921. Anderson, T. (2003), An Introduction to Multivariate Statistical Analysis, 3rd ed., Wiley, New York. Antoniadis, A. (2007), Wavelet methods in statistics: Some recent developments and their applications, Statistics Surveys 1, 16–55. Bach, F. (2008), Consistency of trace norm minimization, Journal of Machine Learning Research 9, 1019–1048. Bach, F., Jenatton, R., Mairal, J. and Obozinski, G. (2012), Optimization with sparsity-inducing penalties, Foundations and Trends in Machine Learning 4(1), 1–106. Banerjee, O., El Ghaoui, L. and d’Aspremont, A. (2008), Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data, Journal of Machine Learning Research 9, 485–516. Baraniuk, R. G., Davenport, M. A., DeVore, R. A. and Wakin, M. B. (2008), A simple proof of the restricted isometry property for random matrices, Constructive Approximation 28(3), 253–263. Barlow, R. E., Bartholomew, D., Bremner, J. M. and Brunk, H. D. (1972), Statistical Inference under Order Restrictions: The Theory and Application of Isotonic Regression | {"source": 1206, "title": "from dpo"}
privacy-preserving recommender system is presented in the context of electronic commerce. The principle behind their approach is called the division of trust principle and can be informally stated as "Trust no one, but you may trust two...". In more detail, this means that customers can trust merchants with information about what they want, but not trust them with personal information related to their identity or their past behavior. However, they may trust someone else with identity-related data, but not with information about what they want to purchase. In the case of an electronic commerce recommender system, this translates to trusting merchants with the query to the database of available products and trusting another party (known as the semi-trusted third party) with demographic information. The components involved in the ALAMBIC architecture are: 1. The merchant platform — The merchant’s computing platform is in charge of the catalogue of available products and maintains a database for storing encrypted customer profiles and rating tables. Even though the merchant platform holds this encrypted customer data, it cannot interpret it without the help of the Alambic agent. 2. The Still Maker — This is in fact the semi-trusted third party. Its purpose is to generate a separate Alambic agent for each supported merchant platform. 3. The customer — wishes to receive product recommendations from the merchant. 4. The Alambic agent — serves as an intermediary between the merchant and the customer. The Still Maker generates the Alambic agent which is then deployed as his surrogate within the merchant platform where he performs the necessary tasks for the recommendation process. Privacy Inference Model A model for inferring privacy from trust is suggested by with the aim of protecting profile content against exposure to low trusted | {"source": 3686, "title": "from dpo"}
NMZK \emph{proof} without any set-up assumptions. Our protocol requires $O(n)$ rounds assuming one-way functions, or $\tilde{O}(\log n)$ rounds assuming collision-resistant hash functions. **Equivalence of Uniform Key Agreement and Composition Insecurity** _Chongwon Cho and Chen-Kuei Lee and Rafail Ostrovsky_ It is well known that proving the security of a key agreement protocol (even in a special case where the protocol transcript looks random to an outside observer) is at least as difficult as proving $P \not = NP$. Another (seemingly unrelated) statement in cryptography is the existence of two or more non-adaptively secure pseudo-random functions that do not become adaptively secure under sequential or parallel composition. In 2006, Pietrzak showed that {\em at least one} of these two seemingly unrelated statements is true. In other words, the existence of key agreement or the existence of the adaptively insecure composition of non-adaptively secure functions is true. Pietrzak's result was significant since it showed a surprising connection between the worlds of public-key (i.e., "cryptomania") and private-key cryptography (i.e., "minicrypt"). In this paper we show that this duality is far stronger: we show that {\em at least one} of these two statements must also be false. In other words, we show their {\em equivalence}. More specifically, Pietrzak's paper shows that if sequential composition of two non-adaptively secure pseudo-random functions is not adaptively secure, then there exists a key agreement protocol. However, Pietrzak's construction implies a slightly stronger fact: If sequential composition does not imply adaptive security (in the above sense), then a {\em uniform-transcript} key agreement protocol exists, where by uniform-transcript we mean a key agreement protocol where the transcript of the protocol execution is indistinguishable from uniform to eavesdroppers. 
In this paper, we complete the picture, and show the reverse direction as well as a strong equivalence between these two notions. More specifically, | {"source": 5726, "title": "from dpo"} |
high boiling points, reducing the probability that the coolant can boil, which could lead to a loss-of-coolant accident. Low vapor pressure enables operation at near-ambient pressure, further dramatically reducing the probability of an accident. Some designs immerse the entire core and heat exchangers into a pool of coolant, virtually eliminating the risk that inner-loop cooling will be lost. === Mercury === Clementine was the first liquid metal cooled nuclear reactor and used mercury coolant, thought to be the obvious choice since it is liquid at room temperature. However, because of disadvantages including high toxicity, high vapor pressure even at room temperature, low boiling point producing noxious fumes when heated, relatively low thermal conductivity, and a high neutron cross-section, it has fallen out of favor. === Sodium and NaK === Sodium and NaK (a eutectic sodium-potassium alloy) do not corrode steel to any significant degree and are compatible with many nuclear fuels, allowing for a wide choice of structural materials. NaK was used as the coolant in the first breeder reactor prototype, the Experimental Breeder Reactor-1, in 1951. Sodium and NaK do, however, ignite spontaneously on contact with air and react violently with water, producing hydrogen gas. This was the case at the Monju Nuclear Power Plant in a 1995 accident and fire. Sodium is also the coolant used in the Russian BN reactor series and the Chinese CFR series in commercial operation today. Neutron activation of sodium also causes these liquids to become intensely radioactive during operation, though the half-life is short and therefore their radioactivity does not pose an additional disposal concern. There are two proposals for a sodium cooled Gen IV LMFR, one based on oxide fuel, the other on the metal-fueled integral fast reactor. === Lead === Lead has excellent neutron properties (reflection, low absorption) and is | {"page_id": 4608787, "title": "Liquid metal cooled reactor"} |
to food allergies === Diabetes === Diabetes mellitus is a disease in which one's blood sugar levels are elevated. There are two forms of diabetes: Type 1 diabetes and Type 2 diabetes. Type 1 is caused by the immune system attacking insulin-producing cells in the pancreas. Type 2 is caused by the underproduction of insulin and the body's cells becoming resistant to insulin. A low-glycemic diet that is high in fiber is recommended for diabetics because low-glycemic foods digest more slowly in the body. Slower digestion helps stabilize blood glucose levels and prevents spikes in blood sugar. === Cancer === Cancer is a disease with multifactorial causes. Cigarette smoking, physical activity, viruses, and diet play a role in the development of cancer. Poor diet has been linked to the development of cancer, while a healthy diet has been shown to have positive effects on preventing and treating cancer. Cruciferous vegetables contain chemicals called isothiocyanates (ITCs). ITCs have immune-boosting effects, as well as anti-cancer activity such as the prevention of angiogenesis. Angiogenesis is a process by which tumors develop their own blood supply in order to feed growing cancer cells. The alliinase-containing food group, allium, has anti-cancer and anti-inflammatory properties. Alliinase is an enzyme that acts as an angiogenesis inhibitor and a carcinogen detoxifier. Mushrooms reduce cancer cell and tumor growth and prevent DNA damage. Mushrooms have aromatase inhibitors that decrease the levels of estrogen released into the bloodstream, slowing the production of breast tissue. Fruits and vegetables contain flavonoids, which are anti-carcinogens. == Macronutrients == Macronutrients are a class of nutrients that the human body needs in larger amounts in order to function properly; the three main classes of macronutrients are proteins, carbohydrates, and fats (lipids).
The main role of macronutrients besides to make sure the body functions properly | {"page_id": 61977673, "title": "Nutritional immunology"} |
French around 1790, which in turn came from Middle Dutch dūne. == Formation == A universally precise distinction does not exist between ripples, dunes, and draas, which are all deposits of the same type of materials. Dunes are generally defined as greater than 7 cm tall and may have ripples, while ripples are deposits that are less than 3 cm tall. A draa is a very large aeolian landform, with a length of several kilometers and a height of tens to hundreds of meters, and which may have superimposed dunes. Dunes are made of sand-sized particles, and may consist of quartz, calcium carbonate, snow, gypsum, or other materials. The upwind/upstream/upcurrent side of the dune is called the stoss side; the downflow side is called the lee side. Sand is pushed (creep) or bounces (saltation) up the stoss side, and slides down the lee side. A side of a dune that the sand has slid down is called a slip face (or slipface). The Bagnold formula gives the speed at which particles can be transported. == Aeolian dunes == === Aeolian dune shapes === Five basic dune types are recognized: crescentic, linear, star, dome, and parabolic. Dune areas may occur in three forms: simple (isolated dunes of basic type), compound (larger dunes on which smaller dunes of same type form), and complex (combinations of different types). ==== Barchan or crescentic ==== Barchan dunes are crescent-shaped mounds which are generally wider than they are long. The lee-side slipfaces are on the concave sides of the dunes. These dunes form under winds that blow consistently from one direction (unimodal winds). They form separate crescents when the sand supply is comparatively small. When the sand supply is greater, they may merge into barchanoid ridges, and then transverse dunes (see below). Some types of crescentic | {"page_id": 7890, "title": "Dune"} |
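The Bagnold formula mentioned in this row is commonly written q = C · sqrt(d/D) · (ρ/g) · u*³ for the mass flux of wind-blown sand. A minimal numerical sketch follows; the constant C ≈ 1.8 and the reference grain diameter D = 250 μm are typical textbook values for naturally graded dune sand, assumed here rather than taken from the passage above.

```python
import math

def bagnold_flux(u_star, grain_diameter=250e-6, C=1.8,
                 air_density=1.225, g=9.81, D=250e-6):
    """Sand mass flux q = C * sqrt(d/D) * (rho/g) * u*^3 in kg/(m*s).

    C ~ 1.8 and the reference diameter D = 250 micrometres are assumed
    typical values, not taken from the text; u_star is the shear
    velocity of the wind in m/s.
    """
    return C * math.sqrt(grain_diameter / D) * (air_density / g) * u_star ** 3

# transport grows with the cube of the shear velocity u*:
q_low = bagnold_flux(0.3)   # u* = 0.3 m/s
q_high = bagnold_flux(0.6)  # doubling u* gives 8x the flux
```

The cubic dependence on shear velocity is the practically important feature: modest increases in wind strength move disproportionately more sand.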
of the Variscan mountains to only a few hundred meters above sea level. Embayments of the Tethys Ocean flooded the edge of the South Alpine and Austroalpine superunits, leaving fossil-rich limestones as well as salt and gypsum deposits. The region experienced a dry climate akin to modern-day Arabia, before the Permian-Triassic mass extinction. Amidst the slower tectonic activity of the Triassic, continental sediments formed dark limestones in a poorly ventilated offshore marine environment. The ocean expanded inland, creating a large shelf environment, and improved water circulation from the Tethys Ocean generated a large reef belt. Volcanic activity picked up 230 million years ago, forming the South Tyrolean Dolomites and depositing ash fall on the Northern Calcareous Alps. Sandstone, gypsum and mudstone formed in near-shore areas of the Sub-Penninic and Helvetic superunits. At a few spots, such as Lunz am See, sand and claystones are interbedded with coal. The famous Hallstatt Limestone, well known for ammonite fossils, was deposited in warm shallow water less than 100 meters deep. The Meliatic Superunit, on the other hand, preserves deep sea siliceous material formed 4000 meters below the surface of the Tethys Ocean. === Breakup of Pangaea: Jurassic-Cretaceous === Rifts began to form within Pangaea opening the Atlantic Ocean. At first, the rifts filled with continental sediments. The Tarntal Breccia in the Austroalpine Superunit preserves tectonic rock fracturing. A fault opened to the west, forming the Penninic Ocean. Deep plumes of mantle rock reached the surface, interacting with seawater to become serpentinite. Central Europe formed the northwestern margin of the Penninic Ocean. Permian and Triassic sediments emerged above sea level on the Moldanubian, Moravian, Helvetic and Sub-Penninic superunits. As the Tethys Ocean crust began to subduct, only small fragments of its crust remained in the Meliatic Superunit in the Eastern Alps. 
The Tethys Ocean | {"page_id": 58231555, "title": "Geology of Austria"} |
HD 61330 (f Puppis) is a class B8IV (blue subgiant) star in the constellation Puppis. Its apparent magnitude is 4.53 and it is approximately 360 light years away based on parallax. It is a multiple star, with a secondary component C, with magnitude 6.07 in an 81-year orbit with eccentricity 0.64. Another closer component, B, has been reported at 6.1 magnitude and 0.1" separation, but subsequent observers have repeatedly failed to confirm it. == References == | {"page_id": 38339049, "title": "HD 61330"} |
was awarded the John von Neumann Theory Prize for his discovery of non-cooperative equilibria, now called Nash equilibria. He won the Leroy P. Steele Prize in 1999. In 1994, he received the Nobel Memorial Prize in Economic Sciences (along with John Harsanyi and Reinhard Selten) for his game theory work as a Princeton graduate student. In the late 1980s, Nash began using email to gradually link with working mathematicians who realized that he was the John Nash and that his new work had value. They formed part of the nucleus of a group that contacted the Bank of Sweden's Nobel award committee and were able to vouch for Nash's mental health and ability to receive the award. Nash's later work involved ventures in advanced game theory, including partial agency, which showed that, as in his early career, he preferred to select his own path and problems. Between 1945 and 1996, he published 23 scientific papers. Nash suggested hypotheses on mental illness. He compared not thinking in an acceptable manner, or being "insane" and not fitting into a usual social function, to being "on strike" from an economic point of view. He advanced views in evolutionary psychology about the potential benefits of apparently nonstandard behaviors or roles. Nash criticized Keynesian ideas of monetary economics which allowed for a central bank to implement monetary policies. He proposed a standard of "Ideal Money" pegged to an "industrial consumption price index" which was more stable than "bad money." He noted that his thinking on money and the function of monetary authority paralleled that of economist Friedrich Hayek. Nash received an honorary degree, Doctor of Science and Technology, from Carnegie Mellon University in 1999, an honorary degree in economics from the University of Naples Federico II in 2003, an honorary doctorate | {"page_id": 102567, "title": "John Forbes Nash Jr."}
The Society of Engineers was a British learned society established in 1854. It was the first society to issue the professional title of Incorporated Engineer. It merged with the Institution of Incorporated Engineers (IIE) in 2005, and in 2006 the merged body joined with the Institution of Electrical Engineers to become the Institution of Engineering and Technology. == History == === Establishment === Established in May 1854 in The Strand, London, the Society of Engineers was one of the oldest professional engineering bodies in the United Kingdom (after the Smeatonian Society of Civil Engineers, 1771, the Institution of Civil Engineers, 1818, and the Institution of Mechanical Engineers, 1847) It promoted the interests of members worldwide and was concerned with all branches of engineering. It was founded by Henry Palfrey Stephenson and Robert Monro Christie as a means of reunion for former students of Putney College (the short-lived College for Civil Engineers, 1839–c.1851) — one of few institutions then giving technical and scientific training for engineers — with Stephenson serving as president in 1856 and 1859. 
=== Timeline === 1839 – College for Civil Engineers founded 1854 – Society of Engineers (SoE) founded 1884 – Junior Institution of Engineers founded 1976 – Junior Institution of Engineers renamed the Institution of Mechanical & General Technician Engineers (IMGTechE) Early 20th century – Association of Supervisory Electrical Engineers (ASEE) founded 1928 – Cumann na nInnealtoiri (The Engineers Society) is founded in Ireland Early 20th century – Institute of Engineers and Technicians (IET) founded Mid 20th century – Institution of Incorporated Executive Engineers (IIExE) founded Mid 20th century – The Institution of Electronics and Radio Engineers (IERE) founded 1965 – Institution of Electrical and Electronics Technician Engineers (IEETE) founded, incorporating ASEE (with support from the IEE) 1965 – The Society of Electronics and Radio Technicians | {"page_id": 8271284, "title": "Society of Engineers (United Kingdom)"} |
In celestial mechanics, true anomaly is an angular parameter that defines the position of a body moving along a Keplerian orbit. It is the angle between the direction of periapsis and the current position of the body, as seen from the main focus of the ellipse (the point around which the object orbits). The true anomaly is usually denoted by the Greek letters ν or θ, or the Latin letter f, and is usually restricted to the range 0–360° (0–2π rad). The true anomaly f is one of three angular parameters (anomalies) that can be used to define a position along an orbit, the other two being the eccentric anomaly and the mean anomaly. == Formulas == === From state vectors === For elliptic orbits, the true anomaly ν can be calculated from orbital state vectors as: {\displaystyle \nu =\arccos {{\mathbf {e} \cdot \mathbf {r} } \over {\mathbf {\left|e\right|} \mathbf {\left|r\right|} }}} (if r ⋅ v < 0 then replace ν by 2π − ν) where: v is the orbital velocity vector of the orbiting body, e is the eccentricity vector, r is the orbital position vector (segment FP in the figure) of the orbiting body. ==== Circular orbit ==== For circular orbits the true anomaly is undefined, because circular orbits do not have a uniquely determined periapsis. Instead the argument of latitude u is used: {\displaystyle u=\arccos {{\mathbf {n} \cdot \mathbf {r} } \over {\mathbf {\left|n\right|} \mathbf {\left|r\right|} }}} (if rz < 0 then replace u by 2π − u) where: n is a vector pointing towards the ascending node (i.e. the z-component of n is zero). rz is the z-component of the | {"page_id": 969603, "title": "True anomaly"}
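The arccos formula in this row can be checked numerically. The eccentricity vector is computed below via the standard identity e = ((v² − μ/r) r − (r·v) v)/μ, which the excerpt does not spell out; a minimal sketch:

```python
import math

def true_anomaly(r, v, mu=1.0):
    """True anomaly from position r and velocity v (3-tuples), using the
    arccos formula above; mu is the gravitational parameter.

    The eccentricity vector is obtained from the standard identity
    e = ((v^2 - mu/r) r - (r.v) v) / mu (an assumption not stated in
    the excerpt itself)."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    norm = lambda x: math.sqrt(dot(x, x))
    rn = norm(r)
    e = [((dot(v, v) - mu / rn) * ri - dot(r, v) * vi) / mu
         for ri, vi in zip(r, v)]
    cos_nu = dot(e, r) / (norm(e) * rn)
    nu = math.acos(max(-1.0, min(1.0, cos_nu)))  # clamp rounding noise
    if dot(r, v) < 0:  # moving from apoapsis back toward periapsis
        nu = 2 * math.pi - nu
    return nu

# at periapsis of a mu = 1 orbit (r perpendicular to v, speed above circular):
nu = true_anomaly((1.0, 0.0, 0.0), (0.0, 1.2, 0.0))  # -> 0.0
```

The `r ⋅ v < 0` branch implements the "replace ν by 2π − ν" correction quoted in the text.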
will interpret dozens or hundreds of MB of data in such a tag. The ease of data loss and the undefined nature of storing arbitrary data in an id3 tag make the concept ill-suited for this purpose; it is something of a gross hack that only happens to work as intended through luck (if the backwards compatibility does not fail catastrophically due to undefined behavior) or through specialized software that knows exactly what mp3HD is. An mp3HD file is a single file (unlike WavPack's lossy-plus-correction-file method), whether it plays back as mp3HD or not. As of 2023, most platforms play FLAC files, which are always lossless. == Products that support mp3HD == Hardware Samsung IceTouch (YP-H1): at CES 2010, Samsung announced that it would release the first mp3 player capable of playing the lossless part of the mp3HD format. It was supposed to be released sometime in 2010, but as of April 8, 2011, nothing had been released. Samsung YP-Z3: the world's first mp3 player released with mp3HD support (released end of August 2011). Samsung YP-R2 (released in November 2011). Software Winamp w/ plugin (Windows only); Windows Media Player w/ DirectShow filter; mp3/HD/surround/SX player (Windows and Mac) == Alternative technologies == Lossless: FLAC, WavPack, Monkey's Audio, ALAC, TTA == See also == Lossless Compression Lossy Compression Audio Compression == References == == External links == Technical specification Download | {"page_id": 28795194, "title": "Mp3HD"}
the political nature and sensitivity of evaluating government's policies. The difficulties of policy evaluation also apply to environmental policies. Also there, policy evaluation is often approached in simple terms based on the extent to which the stated goals of a policy have been achieved or not ("success or failure"). However, as many environmental policy analysts have pointed out, many other aspects of environmental policy are important. These include the goals and objectives of the policies (which may be deemed too vague, inadequate, poorly or wrongly targeted), their distributional effects (whether they contribute to or reduce environmental and social injustice), the kind of instruments used (for instance, their ethical and political dimensions), the processes by which policies have been developed (public participation and deliberation), and the extent to which they are institutionally supported. === Policy integration === The concept of policy integration has been discussed since the 1980s under various terms, such as policy mainstreaming, policy coordination, and holistic governance. In the environmental field, it is often called environmental policy integration. The main idea is that policies in one area (or domain) should consider their effects on other areas. If this is not done properly then policies coming from different domains or organizations could interfere with each other. Policy integration can apply to various aspects, such as policy goals, procedures, tools, and outcomes. Many environmental thinkers and policy analysts have pointed out that addressing environmental problems effectively requires an integrated approach. As the environment is an integrated whole or system, environmental policies need to take account of the interactions within that system and the effects of human actions and interventions not just on a problem in isolation, but also their (potential) effects of other problems. 
More often than not, fragmented policies and "solutions", for instance, to combat pollution, lead to the | {"page_id": 3407706, "title": "Environmental policy"} |
Algorithms. Dryja, M.; Widlund, O. Abstract | PDF
Ph.D. Thesis, 1992. Complexity Issues in Computational Algebra. Gallo, Giovanni. Abstract | PDF
TR1992-607, 1992. GMRES/CR and Arnoldi/Lanczos as Matrix Approximation Problems. Greenbaum, A.; Trefethen, L. Abstract | PDF
TR1992-608, 1992. Matrices that Generate the Same Krylov Residual Spaces. Greenbaum, A.; Strakos, Z. Abstract | PDF
Ph.D. Thesis, 1992. Typing Higher-Order Functions with Dynamic Dispatching. Hsieh, Chih-Hung. Abstract | PDF
Ph.D. Thesis, 1992. Computer Simulation of Cortical Polymaps. Landau, Pierre. Abstract | PDF
Ph.D. Thesis, 1992. Polymorphic Type Inference and Abstract Data Types. Laufer, Konstantin. Abstract | PDF
Ph.D. Thesis, 1992. A sublanguage based medical language processing system for German. Oliver, Neil. Abstract | PDF
Ph.D. Thesis, 1992. Image Processing, Pattern Recognition and Attentional Algorithms in a Space-Variant Active Vision System. Ong, Ping-Wen. Abstract | PDF
Ph.D. Thesis, 1992. On Compiling Regular Loops for Efficient Parallel Execution. Ouyang, Pei. Abstract | PDF
TR1992-597, 1992. Semantic Analyses for Storage Management Optimizations in Functional Language Implementations. Park, G. Abstract | PDF
TR1992-616, 1992. Domain Decomposition Algorithms for the P-Version Finite Element Method for Elliptic Problems. Pavarino, L. Abstract | PDF
TR1992-614, 1992. Some Schwarz Algorithms for the P-Version Finite Element Method. Pavarino, L. Abstract | PDF
Ph.D. Thesis, 1992. Japanese/English Machine Translation Using Sublanguage Patterns and Reversible Grammars. Peng, Ping. Abstract | PDF
Ph.D. Thesis, 1992. The Analysis and Generation of Tests for Programming Language Translators. Rennels, Deborah. Abstract | PDF
Ph.D. Thesis, 1992. Massively Parallel Bayesian Object Recognition. Rigoutsos, Isidore. Abstract | PDF
Ph.D. Thesis, 1992. Control of a Dexterous Robot Hand: Theory, Implementation, and Experiments. Silver, Naomi. Abstract | PDF
Ph.D. Thesis, 1992. Executable Operational Semantics of Programming Languages. Siritzky, Brian. Abstract | PDF
Ph.D. Thesis, 1992. Non-Correcting Error Recovery For LR Parsers. Snyder, Kirk. Abstract | PDF
Ph.D. Thesis, 1992. Global | {"source": 2293, "title": "from dpo"}
Figure 2.5: The Schönhardt polyhedron cannot be subdivided into tetrahedra without adding new vertices. ca ′ in Figure 2.5b—are now epigonals, that is, they lie in the exterior of the polyhedron. Since these epigonals are the only edges between vertices that are not part of the polyhedron, there is no way to add edges to form a tetrahedron for a subdivision. Clearly the polyhedron is not a tetrahedron by itself, and so we conclude that it does not admit a subdivision into tetrahedra without adding new vertices. If adding new vertices—so-called Steiner vertices—is allowed, then there is no problem constructing a tetrahedralization, and this holds true in general. Algorithms. Knowing that a triangulation exists is nice, but it is much better to know that it can also be constructed efficiently. Exercise 2.19 Convert Theorem 2.13 into an O(n²) time algorithm to construct a triangulation for a given simple polygon on n vertices. The runtime achieved by the straightforward application of Theorem 2.13 is not optimal. We will revisit this question several times during this course and discuss improved algorithms for the problem of triangulating a simple polygon. > 3These “nice” subdivisions can be defined in an abstract combinatorial setting, where they are called simplicial complexes. The best (in terms of worst-case runtime) algorithm known, due to Chazelle, computes a triangulation in linear time. But this algorithm is very complicated and we will not discuss it here. There is also a somewhat simpler randomized algorithm to compute a triangulation in expected linear time, which we will not discuss in detail, either. Instead you will later see a much simpler algorithm with a pretty-close-to-linear runtime bound. The question of whether there exists a simple (which is | {"source": 4191, "title": "from dpo"}
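Exercise 2.19 in this row asks for a quadratic-time polygon triangulation. The incremental construction of Theorem 2.13 is not reproduced in the excerpt, so the sketch below uses standard ear clipping instead (an assumption on my part, and certainly not the linear-time algorithms mentioned):

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); > 0 iff a CCW turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    """Point-in-CCW-triangle test; boundary points count as inside."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def triangulate(poly):
    """Ear clipping for a simple polygon given in CCW vertex order.
    Returns a list of index triples into poly."""
    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % len(idx)]
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:
                continue  # reflex (or degenerate) vertex: not an ear
            if any(in_triangle(poly[m], a, b, c)
                   for m in idx if m not in (i, j, l)):
                continue  # some remaining vertex blocks this ear
            tris.append((i, j, l))
            del idx[k]    # clip the ear and restart the scan
            break
    tris.append(tuple(idx))
    return tris
```

As written this does an O(n)-cost test for up to O(n) candidate ears per clip, so it is cubic in the worst case; caching each vertex's ear status between clips brings it down to the O(n²) the exercise asks for.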
functionality offered by backup software vendors I have evaluated, I couldn't determine if it would really work (and I chose not to reverse-engineer the software to find an answer). In my job with the Secure Windows® Initiative team at Microsoft, I have long advocated that developers who design and implement logic involving cryptographic algorithms should first prove their ability to write correct cryptographic code. What follows is an incomplete list of potential problems I can see with improperly implemented backup encryption schemes. Danger #1 Using Weak Cryptography An obvious problem with a weak algorithm, such as 40-bit RC4, is that it can be cracked by brute force in about one day on modern hardware, and that's without any fancy optimizations. Naturally, I would like to use the strongest possible cryptographic algorithm, such as 256-bit Advanced Encryption Standard (AES), and (one can dream!) that the software will seamlessly switch to even stronger cryptography as it becomes available in the future. Danger #2 Using Strong Cryptography Incorrectly I am guessing that performing an incremental backup of a large set of files would require the backup software to access the contents of the existing backup in something other than sequential fashion. One easy (and even stupid) way of allowing random-access reading and writing of encrypted data is by turning off a default feature of most block ciphers called Cipher Block Chaining (CBC). This insecure mode, called Electronic Code Book (ECB), is illustrated in **Figure 2**. As you can see, the middle picture, encrypted with ECB, does not conceal much. I would expect a better grade of protection from my backup software!  to be packaged. This is done through a rigorous process of developing, testing, adjusting and retesting each MA/MH product both in the lab and in commercial trials. 
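The ECB weakness described in the backup-encryption passage above (identical plaintext blocks encrypting to identical ciphertext blocks, while chaining makes every block distinct) is easy to demonstrate. In the sketch below, a truncated keyed SHA-256 stands in for a real block cipher; this is an illustrative toy only, not the article's software and not a usable encryption scheme.

```python
import hashlib

BLOCK = 16  # bytes per block

def prf(key: bytes, block: bytes) -> bytes:
    """Keyed stand-in for a block cipher's per-block encryption.
    (A truncated SHA-256 is NOT a cipher; it only illustrates
    how the two modes propagate patterns.)"""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb(key: bytes, data: bytes) -> list:
    # each block handled independently: equal blocks -> equal outputs
    return [prf(key, data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def cbc(key: bytes, data: bytes, iv: bytes = b"\x00" * BLOCK) -> list:
    # each block is XORed with the previous ciphertext block first
    out, prev = [], iv
    for i in range(0, len(data), BLOCK):
        x = bytes(a ^ b for a, b in zip(prev, data[i:i + BLOCK]))
        prev = prf(key, x)
        out.append(prev)
    return out

msg = b"SECRET BLOCK 16B" * 4   # four identical 16-byte plaintext blocks
ecb_blocks = ecb(b"key", msg)   # all four outputs identical: structure leaks
cbc_blocks = cbc(b"key", msg)   # chaining makes every output block distinct
```

This repeated-block leak is exactly why the "middle picture" in the article's Figure 2 still shows the image's outline.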
This meticulous process is repeated for each combination of produce and packaging type because of the many factors involved in developing a successful MA/MH packaging. These factors are “storage and shipment temperature, product respiration rate and quotient, response to levels of CO2, O2 and humidity, and product weight. Hence, film packaging that is adequate for consumer packages is not always suitable for bulk packaging and vice versa.” == See also == Active packaging Oxygen absorber Shelf life Permeation Cold chain == Sources == == References == Fonseca, Jorge M.; High Relative Humidity; Fresh Americas, #1, 2008 Series, Master Media Worldwide Publishing Devon Zagory, Devon Zagory & Associates, University of California, Davis; Advances In Modified Atmosphere Packaging (MAP) of Fresh Produce; Perishables Handling Newsletter Issue No. 90, May 1997, pages 2–3 N. Aharoni, V. Rodov, E. Fallik, R. Porat, E. Pesis and S. Lurie; Department of Postharvest Science of Fresh Produce, ARO, The Volcani Center; Humidity Improves Efficacy of Modified Atmosphere Packaging of Fruits and Vegetables Adel A. Kader, Dept. of Pomology University of California, Davis; Modified Atmosphere Packaging of Fresh Produce; Outlook Second Quarter, Volume 13, No. 2, 1986 Stephen R. Harris; Storage of fresh produce Food and Agriculture Organization of the United Nations; Production is only Half the Battle – A training manual in fresh produce marketing for the Eastern Caribbean, Chapter 8: Storage of fresh produce; Bridgetown, Barbados, December 1988 | {"page_id": 22221728, "title": "Modified atmosphere/modified humidity packaging"} |
NBPH had not "provided any public updates", or "held information sessions in the affected communities or elsewhere, and has not issued any news releases regarding the mystery disease cases. It also has not said where, specifically, any of the cases were identified." As of 7 April 2021, there was a cluster of 44 possible cases and six deaths. Sutherland interviewed the multiple families who shared their frustration about the "wall of silence" on the "mystery brain disease". Joanne Graves, the daughter of an affected woman, was "infuriated" at the "lack of transparency on the issue and by the fact that it did not disclose the cluster until after the memo was leaked". COVID-19 has slowed the work of scientists and doctors investigating the new disease, according to a 12 May 2021 article in The Washington Post. In his interview with the Post, Marrero said that "diagnostic imaging and spinal taps" – used to diagnose degenerative neurological syndromes – had been cancelled because of the pandemic. The New York Times 4 June article on the "mysterious brain syndrome" also reported on autopsy results of three of the eight cases. Following the revelation about the results of the autopsy reports, and the release of the first NBPH report, CBC published a series of articles saying that the cases may have been misdiagnosed. The Walrus published an in-depth story on 22 October raising concerns that Shephard had allegedly shut out federal experts from the investigation into the disease in early June. In an interview with a senior scientist on the syndrome investigative team, who asked for anonymity, they said that in early June, the "highest levels of the New Brunswick government" had taken control of the investigation and had told the federal authorities to stand down. Experts who were equipped with a "capable | {"page_id": 67334510, "title": "New Brunswick neurological syndrome of unknown cause"} |
chemical cycles on Mars in the past; however, the faint young Sun paradox has proved problematic in determining chemical cycles involved in early climate models of the planet. == Jupiter == Jupiter, like all the gas giants, has an atmospheric methane cycle. Recent studies indicate a water-ammonia hydrological cycle vastly different from the type operating on terrestrial planets like Earth, and also a cycle of hydrogen sulfide. Significant chemical cycles exist on Jupiter's moons. Recent evidence points to Europa possessing several active cycles, most notably a water cycle. Other studies suggest an oxygen and radiation-induced carbon dioxide cycle. Io and Europa appear to have radiolytic sulfur cycles involving their lithospheres. In addition, Europa is thought to have a sulfur dioxide cycle, and the Io plasma torus contributes to a sulfur cycle on Jupiter and Ganymede. Studies also imply active oxygen cycles on Ganymede and oxygen and radiolytic carbon dioxide cycles on Callisto. == Saturn == In addition to Saturn's methane cycle, some studies suggest an ammonia cycle induced by photolysis similar to Jupiter's. The cycles of its moons are of particular interest. Observations by Cassini–Huygens of Titan's atmosphere and its interactions with its liquid mantle reveal several active chemical cycles, including methane, hydrocarbon, hydrogen, and carbon cycles. Enceladus has an active hydrological cycle, a silicate cycle, and possibly a nitrogen cycle. == Uranus == Uranus has an active methane cycle. Methane is converted to hydrocarbons through photolysis; these condense and, as they are heated, release methane, which rises to the upper atmosphere. Studies by Grundy et al. (2006) indicate that active carbon cycles operate on Titania, Umbriel, Ariel and Oberon through the ongoing sublimation and deposition of carbon dioxide, though some is lost to space over long periods of time. 
== Neptune == Neptune's internal heat and convection | {"page_id": 49731496, "title": "Chemical cycling"} |
{\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\frac {\partial v^{i}}{\partial q^{i}}}+{\cfrac {1}{2g}}~{\frac {\partial g}{\partial g_{mi}}}~{\frac {\partial g_{im}}{\partial q^{\ell }}}~v^{\ell }={\frac {\partial v^{i}}{\partial q^{i}}}+{\cfrac {1}{2g}}~{\frac {\partial g}{\partial q^{\ell }}}~v^{\ell }} A little manipulation leads to the more compact form {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {1}{\sqrt {g}}}~{\frac {\partial }{\partial q^{i}}}(v^{i}~{\sqrt {g}})} ==== Second-order tensor field ==== The divergence of a second-order tensor field is defined using {\displaystyle ({\boldsymbol {\nabla }}\cdot {\boldsymbol {S}})\cdot \mathbf {a} ={\boldsymbol {\nabla }}\cdot ({\boldsymbol {S}}\mathbf {a} )} where {\displaystyle \mathbf {a} } is an arbitrary constant vector. In curvilinear coordinates, ∇ ⋅ S = [ ∂ S i j ∂ q k − Γ k i l S l j − Γ k j l S i l ] g i k b j = [ ∂ S i j ∂ q i + Γ i l i S l j + Γ i l j S i l ] b j = [ ∂ S j i ∂ q i + Γ i l i S j l − Γ i j l S l i ] b j = [ ∂ S i j ∂ q k − Γ i k l S l j + Γ k l j S i l ] g i k b j {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&=\left[{\cfrac {\partial S_{ij}}{\partial q^{k}}}-\Gamma _{ki}^{l}~S_{lj}-\Gamma _{kj}^{l}~S_{il}\right]~g^{ik}~\mathbf | {"page_id": 35669023, "title": "Tensors in curvilinear coordinates"}
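The compact identity ∇·v = (1/√g) ∂(√g v^i)/∂q^i from this row can be sanity-checked numerically. The sketch below picks spherical coordinates as an illustrative special case (not used in the excerpt), where √g = r² sin θ, and evaluates the derivatives by central finite differences.

```python
import math

def div_spherical(vr, vt, vp, r, theta, phi, h=1e-5):
    """Divergence of a vector field with contravariant components
    (vr, vt, vp) in spherical coordinates (r, theta, phi), computed
    from the identity div v = (1/sqrt(g)) d/dq^i (sqrt(g) v^i) with
    sqrt(g) = r^2 sin(theta), using central differences of step h."""
    sg = lambda rr, th: rr * rr * math.sin(th)
    d_r = (sg(r + h, theta) * vr(r + h, theta, phi)
           - sg(r - h, theta) * vr(r - h, theta, phi)) / (2 * h)
    d_t = (sg(r, theta + h) * vt(r, theta + h, phi)
           - sg(r, theta - h) * vt(r, theta - h, phi)) / (2 * h)
    d_p = (sg(r, theta) * vp(r, theta, phi + h)
           - sg(r, theta) * vp(r, theta, phi - h)) / (2 * h)
    return (d_r + d_t + d_p) / sg(r, theta)

# the purely radial field v = r e_r has contravariant components (r, 0, 0)
# and divergence exactly 3 everywhere, which the formula reproduces:
zero = lambda rr, th, ph: 0.0
d = div_spherical(lambda rr, th, ph: rr, zero, zero, 1.3, 0.8, 0.2)
```

The same check works in any coordinate system once √g is known, which is the practical appeal of the compact form over the Christoffel-symbol expression.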
induce the production of chemokines that would direct the migration of maturing dendritic cells to lymph nodes. Notably, LAG-3-matured dendritic cells were upregulated for CCR7. Later, the same authors showed that soluble LAG-3 could reduce the differentiation of macrophages and dendritic cells from monocytes, suggesting that the positive effect of LAG-3 as a dendritic cell-activator applied to pre-existing dendritic cells. A March 2003 paper in Cancer Research from scientists at the University of Turin, which included Triebel as a co-author, showed that, in mice, efti could potentiate a DNA vaccine targeting HER2 in a spontaneous breast cancer model. A March 2006 online paper in Vaccine showed, in animal models, that efti could immuno-potentiate therapeutic vaccines by inducing dendritic cell maturation. An April 2006 paper in Cancer Research showed, in vitro, that efti could induce an antigen-specific CD8+ T-cell response in human PBMCs – evidenced by the upregulation of T cells that displayed cytotoxic activity and produced Tc1 cytokines. The investigators for this work used influenza matrix protein antigen and the tumor antigens Melan-A/MART-1 and survivin to verify this CD8+ T cell response. They found that a LAG-3-related adjuvant effect depended on direct activation of antigen-presenting cells. For this paper Triebel collaborated with scientists at the Istituto Nazionale dei Tumori in Milan, Italy. A September 2007 paper in the Journal of Immunology showed that efti could induce the activation of a large range of human effector T cells, resulting in the production of IFN-γ and TNF-α, among other cytokines. The investigators found that effector and effector-memory, but not naïve or central memory T cells, were induced by efti to a full Tc1 response. 
In their in vitro work with human blood samples, the investigators found that efti bound all the circulating dendritic cells and a fraction of MHC class II+ | {"page_id": 46557481, "title": "Eftilagimod alpha"}
terms (involution method). Consider the Ferrers diagram of any partition of n into distinct parts. For example, the diagram below shows n = 20 and the partition 20 = 7 + 6 + 4 + 3. Let m be the number of elements in the smallest row of the diagram (m = 3 in the above example). Let s be the number of elements in the rightmost 45-degree line of the diagram (s = 2 dots in red above, since 7 − 1 = 6, but 6 − 1 > 4). If m > s, take the rightmost 45-degree line and move it to form a new row, as in the matching diagram below. If m ≤ s (as in our newly formed diagram, where m = 2, s = 5), we may reverse the process by moving the bottom row to form a new 45-degree line (adding 1 element to each of the first m rows), taking us back to the first diagram. A bit of thought shows that this process always changes the parity of the number of rows, and that applying the process twice brings us back to the original diagram. This enables us to pair off Ferrers diagrams contributing 1 and −1 to the x^n term of the series, resulting in a net coefficient of 0 for x^n. This holds for every term except when the process cannot be performed on every Ferrers diagram with n dots. There are two such cases: 1) m = s and the rightmost diagonal and the bottom row meet. For example, attempting to perform the operation would lead us to a diagram which fails to change the parity of the number of rows, and which is not reversible in the sense that performing the operation again does not take us back to the
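The pairing described above (Franklin's involution) is easy to check by machine. The sketch below is our own code, with invented helper names, not part of the source: it implements the two moves on partitions into distinct parts and verifies that, away from the generalized pentagonal numbers k(3k ± 1)/2, the +1/−1 contributions cancel.

```python
def distinct_partitions(n):
    """Partitions of n into distinct parts, as strictly decreasing tuples."""
    def rec(remaining, max_part):
        if remaining == 0:
            yield ()
            return
        for p in range(min(remaining, max_part), 0, -1):
            for rest in rec(remaining - p, p - 1):
                yield (p,) + rest
    return list(rec(n, n))

def franklin(parts):
    """One step of the involution from the text.

    m = size of the smallest row; s = length of the rightmost
    45-degree line.  Returns the paired partition, or None in the
    two exceptional cases (n a generalized pentagonal number)."""
    m = parts[-1]
    s = 1
    while s < len(parts) and parts[s] == parts[s - 1] - 1:
        s += 1
    staircase_meets_bottom = (s == len(parts))
    if m > s:
        if staircase_meets_bottom and m == s + 1:
            return None                       # fixed diagram: n = s(3s+1)/2
        # move the 45-degree line down to become a new bottom row of s dots
        return tuple([p - 1 for p in parts[:s]] + list(parts[s:]) + [s])
    else:
        if staircase_meets_bottom and m == s:
            return None                       # fixed diagram: n = s(3s-1)/2
        # move the bottom row up: one extra dot on each of the first m rows
        return tuple([p + 1 for p in parts[:m]] + list(parts[m:-1]))

# The worked example from the text: 20 = 7 + 6 + 4 + 3 (m = 3, s = 2)
assert franklin((7, 6, 4, 3)) == (6, 5, 4, 3, 2)
assert franklin((6, 5, 4, 3, 2)) == (7, 6, 4, 3)   # applying it twice undoes it

# Net coefficient of x^n in prod (1 - x^k) is the signed count of
# distinct partitions; it should vanish except at g_k = k(3k +/- 1)/2.
for n in range(1, 16):
    signed = sum((-1) ** len(p) for p in distinct_partitions(n))
    pentagonal = any(n in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2)
                     for k in range(1, n + 1))
    assert (signed != 0) == pentagonal
    for p in distinct_partitions(n):
        q = franklin(p)
        if q is not None:
            # every non-fixed diagram pairs with one of opposite parity
            assert franklin(q) == p and len(q) % 2 != len(p) % 2
print("involution verified for n < 16")
```

The two `None` branches are exactly the exceptional cases the text goes on to describe, which is why nonzero coefficients survive only at the generalized pentagonal numbers.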
of destroying stars. Resistance to Cabal rule has resulted in entire star systems being destroyed. The intelligence also reveals that the Almighty is positioned near the Sun, breaking up the planet Mercury for use as fuel. Zavala tasks the Guardian with finding Ikora Rey and Cayde-6 to assist in a counterattack to retake the Last City. During this time, it is shown that Ghaul, aided by his mentor, the Consul, overthrew and exiled Emperor Calus, took control of the Cabal, and has been studying the Traveler in order to learn how to utilise the Light. The Guardian locates Cayde-6 on the centaur Nessus, which has been almost completely transformed by the Vex. With the aid of Failsafe, an AI from the crashed colony ship Exodus Black, the Guardian frees Cayde from a Vex portal loop and claims a teleporter for use in taking back the city. Cayde directs the Guardian to find Ikora on the Jovian moon of Io (which the Traveler had partially terraformed until the Darkness arrived), where she had gone to find answers about the Traveler. Ikora and Io researcher Asher Mir direct the Guardian to locate a Warmind, an ancient defensive AI, for intelligence on the Almighty. This intelligence reveals that simply destroying the Almighty would take the Sun down with it. Afterwards, the Vanguard reunites at the Farm and concludes that the only way to retake the Last City and save the Traveler is to shut down the Almighty first, eliminating the possibility of it destroying the Sun. The Guardian boards the Almighty using a stolen Cabal ship and disables the weapon, signaling Zavala to begin the counterattack. As the Vanguard begins the assault, the Consul admonishes Ghaul for his obsession with the Traveler, and urges him to simply take the Light by force, rather than
decrease of both EPSP and inhibitory postsynaptic potential (IPSP) amplitudes from small to large motoneurons. This seemed to confirm Henneman's idea, but Burke disagreed, pointing out that larger neurons with a larger surface area had space for more synapses. Burke eventually showed (in a very small sample of neurons) that smaller motoneurons have a greater number of synaptic inputs from a single input source. The topic is probably still regarded as controversial. In their 1982 paper, Burke and colleagues propose that the small cell size and high surface-to-volume ratio of S motor units allows for greater metabolic activity, optimized for the "highest duty cycles" of motoneurons, while other motor unit types may be involved in "lower duty cycles." However, they state that the evidence is not conclusive "to support or deny the intuitively appealing notion that there is a correlation between metabolic activity, motoneuron size, and motor unit type." Under some circumstances, the normal order of motor unit recruitment may be altered, such that small motor units cease to fire and larger ones may be recruited. This is thought to be due to the interaction of excitatory and inhibitory motoneuronal inputs. == Recruitment of motor unit types == Another topic of controversy resides in the way in which Burke and colleagues categorized motor unit types. They designated three general groups by which motor units could be categorized: S (slow – slow twitch), FR (fast, resistant – fast twitch, fatigue-resistant), and FF (fast, fatigable – fast twitch, fatigable). These designations have served as the basis for motor unit categorization since their conception, but modern research indicates that human motor units are more complex and possibly do not directly fit this model. However, it is important to note that Burke himself recognized the risk in classifying motor units: My friend the late Elwood
Founded in 1961, the American College of Neuropsychopharmacology (ACNP) is a professional organization of leading brain and behavior scientists. The principal functions of the College are research and education. Their goals in research are to offer investigators an opportunity for cross-disciplinary communication and to promote the application of various scientific disciplines to the study of the brain's effect on behavior, with a focus on mental illness of all forms. Their educational goals are to encourage young scientists to enter research careers in neuropsychopharmacology and to develop and provide accurate information about behavioral disorders and their pharmacological treatment. == Organization == The college is an honorary society. Members are selected primarily on the basis of their original research contributions to the broad field of neuroscience. The membership of the college is drawn from scientists in multiple fields including behavioral pharmacology, neuroimaging, chronobiology, clinical psychopharmacology, epidemiology, genetics, molecular biology, neurochemistry, neuroendocrinology, neuroimmunology, neurology, neurophysiology, psychiatry, and psychology. == Annual meeting == The annual meeting of the College is a closed meeting; only ACNP members and their invited guests may attend. Because of the College's intense concern with, and involvement in, the education and training of tomorrow's brain scientists, the College selects a number of young scientists to be invited to the annual meeting through a competitive process open to all early career researchers. This meeting, a mix of the foremost brain and behavior research worldwide, is designed to encourage dialogue, discussion, and synergy among those attending. == Awards == The ACNP offers the following awards:
Julius Axelrod Mentorship Award
Daniel H. Efron Research Award
Joel Elkes Research Award
Barbara Fish Memorial Award
Paul Hoch Distinguished Service Award
Eva King Killam Research Award
Dolores Shockley Diversity and Inclusion Advancement Award
Media Award
Public Service Award
Women's Advocacy Award
== Publication == The Springer-Nature | {"page_id": 41353264, "title": "American College of Neuropsychopharmacology"}