Dataset Viewer
Columns: text (string, lengths 4–3.77k) · source (3 classes)
Value creation
In marketing, a company's value proposition is the full mix of benefits or economic value which it promises to deliver to the current and future customers (i.e., a market segment) who will buy its products and/or services. It is part of a company's overall marketing strategy which differentiates its brand and fully positions it in the market. A value proposition can apply to an entire organization, parts thereof, customer accounts, or products or services. Creating a value proposition is a part of the overall business strategy of a company. Kaplan and Norton note that "strategy is based on a differentiated customer value proposition. Satisfying customers is the source of sustainable value creation." Developing a value proposition is based on a review and analysis of the benefits, costs, and value that an organization can deliver to its customers, prospective customers, and other constituent groups within and outside the organization. It is also a positioning of value, where Value = Benefits − Cost (cost includes economic risk). A value proposition can be set out as a business or marketing statement (called a "positioning statement") which summarizes why a consumer should buy a product or use a service. A compellingly worded positioning statement has the potential to convince a prospective consumer that a particular product or service which the company offers will add more value or better solve a problem (i.e. the "pain-point") for them than other similar offerings will, thus turning them into a paying client. The positioning statement usually contains references to the sector the company is operating in, what products or services it is selling, who its target clients are, and which points differentiate it from other brands and make its product or service a superior choice for those clients. It is usually communicated to customers via the company's website and other advertising and marketing materials. Conversely, a customer's value proposition is the perceived subjective value, satisfaction or usefulness of a product or service (based on its differentiating features and its personal and social values for the customer) delivered to and experienced by the customer when they acquire it. It is the net positive subjective difference between the total benefits they obtain from it and the sum of monetary cost and non-monetary sacrifices (relative benefits offered by other alternative competitive products) which they have to give up in return. | Wiki-STEM-kaggle |
Value creation
However, there is often a discrepancy between what the company thinks about its value proposition and what the clients think it is. A company's value propositions can evolve, whereby values can add up over time. For example, Apple's value proposition contains a mix of three values. Originally, in the 1980s, it communicated that its products are creative, elegant and "cool" and thus different from the status quo ("Think different"). Then, in the first two decades of the 21st century, it communicated its second value of providing customers with a reliable, smooth, hassle-free user experience within its ecosystem ("Tech that works"). In the 2020s, Apple's latest differentiating value has been the protection of its clients' privacy ("Your data is safe with us"). | Wiki-STEM-kaggle |
Product model
In marketing, a product is an object, system, or service made available for consumer use in response to consumer demand; it is anything that can be offered to a market to satisfy the desire or need of a customer. In retailing, products are often referred to as merchandise, and in manufacturing, products are bought as raw materials and then sold as finished goods. A service is also regarded as a type of product. In project management, products are the formal definition of the project deliverables that make up or contribute to delivering the objectives of the project. A related concept is that of a sub-product, a secondary but useful result of a production process. Dangerous products, particularly physical ones, that cause injuries to consumers or bystanders may be subject to product liability. | Wiki-STEM-kaggle |
Customer segmentation
In marketing, market segmentation is the process of dividing a broad consumer or business market, normally consisting of existing and potential customers, into sub-groups of consumers (known as segments) based on shared characteristics. In dividing or segmenting markets, researchers typically look for common characteristics such as shared needs, common interests, similar lifestyles, or even similar demographic profiles. The overall aim of segmentation is to identify high yield segments – that is, those segments that are likely to be the most profitable or that have growth potential – so that these can be selected for special attention (i.e. become target markets). Many different ways to segment a market have been identified. Business-to-business (B2B) sellers might segment the market into different types of businesses or countries, while business-to-consumer (B2C) sellers might segment the market into segments based on demographics, lifestyle, behavior, or socioeconomic status. Market segmentation assumes that different market segments require different marketing programs – that is, different offers, prices, promotions, distribution, or some combination of marketing variables. Market segmentation is not only designed to identify the most profitable segments, but also to develop profiles of key segments in order to better understand their needs and purchase motivations. Insights from segmentation analysis are subsequently used to support marketing strategy development and planning. Many marketers use the S-T-P approach: Segmentation → Targeting → Positioning, to provide the framework for marketing planning objectives. That is, a market is segmented, one or more segments are selected for targeting, and products or services are positioned in a way that resonates with the selected target market or markets. | Wiki-STEM-kaggle |
Markup overlap
In markup languages and the digital humanities, overlap occurs when a document has two or more structures that interact in a non-hierarchical manner. A document with overlapping markup cannot be represented as a tree. This is also known as concurrent markup. Overlap happens, for instance, in poetry, where there may be a metrical structure of feet and lines; a linguistic structure of sentences and quotations; and a physical structure of volumes and pages and editorial annotations. | Wiki-STEM-kaggle |
Émery topology
In martingale theory, the Émery topology is a topology on the space of semimartingales. The topology is used in financial mathematics. The class of stochastic integrals with general predictable integrands coincides with the closure of the set of all simple integrals. The topology was introduced in 1979 by the French mathematician Michel Émery. | Wiki-STEM-kaggle |
Fragmentation pattern
In mass spectrometry, fragmentation is the dissociation of energetically unstable molecular ions formed when molecules pass through the ionization source of a mass spectrometer. These reactions have been well documented over the decades, and fragmentation patterns are useful for determining the molecular weight and structural information of unknown molecules. Fragmentation that occurs in tandem mass spectrometry experiments has been a recent focus of research, because this data helps facilitate the identification of molecules. | Wiki-STEM-kaggle |
Quadrupole mass analyzer
In mass spectrometry, the quadrupole mass analyzer (or quadrupole mass filter) is a type of mass analyzer originally conceived by Nobel laureate Wolfgang Paul and his student Helmut Steinwedel. As the name implies, it consists of four cylindrical rods, set parallel to each other. In a quadrupole mass spectrometer (QMS) the quadrupole is the mass analyzer - the component of the instrument responsible for selecting sample ions based on their mass-to-charge ratio (m/z). Ions are separated in a quadrupole based on the stability of their trajectories in the oscillating electric fields that are applied to the rods. | Wiki-STEM-kaggle |
Sieving coefficient
In mass transfer, the sieving coefficient is a measure of equilibration between the concentrations of two mass transfer streams. It is defined as the mean pre- and post-contact concentration of the mass receiving stream divided by the mean pre- and post-contact concentration of the mass donating stream: S = C_r / C_d, where S is the sieving coefficient, C_r is the mean concentration of the mass receiving stream, and C_d is the mean concentration of the mass donating stream. A sieving coefficient of unity implies that the concentrations of the receiving and donating stream equilibrate, i.e. the out-flow concentrations (post-mass transfer) of the mass donating and receiving streams are equal to one another. Systems with sieving coefficients greater than one require an external energy source, as they would otherwise violate the laws of thermodynamics. Sieving coefficients less than one represent a mass transfer process where the concentrations have not equilibrated. Contact time between mass streams is important to consider in mass transfer and affects the sieving coefficient. | Wiki-STEM-kaggle |
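A minimal sketch of the definition above, using made-up stream concentrations purely for illustration:

```python
def sieving_coefficient(c_receiving_pre, c_receiving_post,
                        c_donating_pre, c_donating_post):
    """S = C_r / C_d, with each C taken as the mean of the pre- and
    post-contact concentrations of that stream."""
    c_r = (c_receiving_pre + c_receiving_post) / 2
    c_d = (c_donating_pre + c_donating_post) / 2
    return c_r / c_d

# Hypothetical concentrations (same units on both streams):
S = sieving_coefficient(0.0, 0.8, 2.0, 1.2)
print(S)  # 0.25, i.e. < 1: the streams did not fully equilibrate
```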
Binary phase
In materials chemistry, a binary phase or binary compound is a chemical compound containing two different elements. Some binary phase compounds are molecular, e.g. carbon tetrachloride (CCl4). More typically, binary phase refers to extended solids. Famous examples include zinc sulfide, which contains zinc and sulfur, and tungsten carbide, which contains tungsten and carbon. Phases with higher degrees of complexity feature more elements, e.g. three elements in ternary phases and four elements in quaternary phases. | Wiki-STEM-kaggle |
Quaternary phase
In materials chemistry, a quaternary phase is a chemical compound containing four elements. Some such compounds can be molecular or ionic, examples being chlorodifluoromethane (CHClF2) and sodium bicarbonate (NaHCO3). More typically, quaternary phase refers to extended solids. A famous example is the family of yttrium barium copper oxide superconductors. | Wiki-STEM-kaggle |
ABC analysis
In materials management, ABC analysis is an inventory categorisation technique. ABC analysis divides an inventory into three categories—"A items" with very tight control and accurate records, "B items" with less tight control and good records, and "C items" with the simplest controls possible and minimal records. The ABC analysis provides a mechanism for identifying items that will have a significant impact on overall inventory cost, while also providing a mechanism for identifying different categories of stock that will require different management and controls. The ABC analysis suggests that the inventories of an organization are not of equal value. Thus, the inventory is grouped into three categories (A, B, and C) in order of their estimated importance. 'A' items are very important for an organization. Because of the high value of these 'A' items, frequent value analysis is required. In addition, an organization needs to choose an appropriate order pattern (e.g. 'just-in-time') to avoid excess capacity. 'B' items are important, but of course less important than 'A' items and more important than 'C' items. Therefore, 'B' items are intergroup items. 'C' items are marginally important. | Wiki-STEM-kaggle |
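A common way to operationalise this is to rank items by annual consumption value and cut the ranked list at cumulative-value thresholds. The 70%/90% cut-offs and the item values below are assumptions chosen for illustration, not figures from the text:

```python
def abc_classify(items, a_cut=0.70, b_cut=0.90):
    """items: dict of {item_name: annual consumption value}.
    Returns {item_name: 'A' | 'B' | 'C'} using cumulative-value cut-offs
    (the cut-off levels are an assumed convention, not fixed by ABC analysis)."""
    total = sum(items.values())
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    labels, cumulative = {}, 0.0
    for name, value in ranked:
        cumulative += value
        share = cumulative / total
        labels[name] = 'A' if share <= a_cut else ('B' if share <= b_cut else 'C')
    return labels

# Hypothetical annual consumption values:
print(abc_classify({'bearings': 60000, 'belts': 25000, 'bolts': 8000,
                    'washers': 5000, 'labels': 2000}))
# bearings -> A, belts -> B, the low-value items -> C
```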
Crack growth resistance curve
In materials modeled by linear elastic fracture mechanics (LEFM), crack extension occurs when the applied energy release rate G exceeds G_R, where G_R is the material's resistance to crack extension. Conceptually, G can be thought of as the energetic gain associated with an additional infinitesimal increment of crack extension, while G_R can be thought of as the energetic penalty of an additional infinitesimal increment of crack extension. At any moment in time, if G ≥ G_R then crack extension is energetically favorable. A complication to this process is that in some materials, G_R is not a constant value during the crack extension process. A plot of crack growth resistance G_R versus crack extension Δa is called a crack growth resistance curve, or R-curve. A plot of energy release rate G versus crack extension Δa for a particular loading configuration is called the driving force curve. The nature of the applied driving force curve relative to the material's R-curve determines the stability of a given crack. The usage of R-curves in fracture analysis is a more complex, but more comprehensive, failure criterion compared to the common failure criterion that fracture occurs when G ≥ G_c, where G_c is simply a constant value called the critical energy release rate. An R-curve-based failure analysis takes into account the notion that a material's resistance to fracture is not necessarily constant during crack growth. R-curves can alternatively be discussed in terms of stress intensity factors (K) rather than energy release rates (G), where the R-curve can be expressed as the fracture toughness (K_Ic, sometimes referred to as K_R) as a function of crack length a. | Wiki-STEM-kaggle |
Radiation length
In materials of high atomic number (e.g. tungsten, uranium, plutonium), electrons of energies above ~10 MeV predominantly lose energy by bremsstrahlung, and high-energy photons by e+e− pair production. The characteristic amount of matter traversed for these related interactions is called the radiation length X0, usually measured in g·cm−2. It is both the mean distance over which a high-energy electron loses all but 1⁄e of its energy by bremsstrahlung, and 7⁄9 of the mean free path for pair production by a high-energy photon. It is also the appropriate length scale for describing high-energy electromagnetic cascades. The radiation length for a given material consisting of a single type of nucleus can be approximated by an expression in terms of Z, the atomic number, and A, the mass number of the nucleus. For Z > 4, a good approximation exists in terms of na, the number density of the nuclei, ℏ, the reduced Planck constant, me, the electron rest mass, c, the speed of light, and α, the fine-structure constant. For electrons at lower energies (below a few tens of MeV), the energy loss by ionization is predominant. While this definition may also be used for other electromagnetically interacting particles beyond leptons and photons, the presence of the stronger hadronic and nuclear interaction makes it a far less interesting characterisation of the material; the nuclear collision length and nuclear interaction length are more relevant. Comprehensive tables for radiation lengths and other properties of materials are available from the Particle Data Group. | Wiki-STEM-kaggle |
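The approximate expressions themselves did not survive extraction in this excerpt. As a stand-in, the sketch below uses one widely quoted compact fit, X0 ≈ 716.4·A / [Z(Z+1)·ln(287/√Z)] g·cm−2; treat this formula as an assumption here, not the expression the passage originally displayed:

```python
import math

def radiation_length_g_cm2(Z, A):
    """Compact fit for the radiation length X0 in g/cm^2 (assumed formula,
    commonly used as an approximation to tabulated PDG values)."""
    return 716.4 * A / (Z * (Z + 1) * math.log(287.0 / math.sqrt(Z)))

# Tungsten: Z = 74, A ~ 183.84
print(radiation_length_g_cm2(74, 183.84))   # ~6.8 g/cm^2, close to the tabulated ~6.76
```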
Giant magnetoimpedance
In materials science, Giant Magnetoimpedance (GMI) is the effect that occurs in some materials where an external magnetic field causes a large variation in the electrical impedance of the material. It should not be confused with the separate physical phenomenon of Giant Magnetoresistance. | Wiki-STEM-kaggle |
Viscoelasticity
In materials science and continuum mechanics, viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like water, resist shear flow and strain linearly with time when a stress is applied. Elastic materials strain when stretched and immediately return to their original state once the stress is removed. Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. Whereas elasticity is usually the result of bond stretching along crystallographic planes in an ordered solid, viscosity is the result of the diffusion of atoms or molecules inside an amorphous material. | Wiki-STEM-kaggle |
Viscous forces
In materials science and engineering, one is often interested in understanding the forces or stresses involved in the deformation of a material. For instance, if the material were a simple spring, the answer would be given by Hooke's law, which says that the force experienced by a spring is proportional to the distance displaced from equilibrium. Stresses which can be attributed to the deformation of a material from some rest state are called elastic stresses. In other materials, stresses are present which can be attributed to the deformation rate over time. These are called viscous stresses. For instance, in a fluid such as water the stresses which arise from shearing the fluid do not depend on the distance the fluid has been sheared; rather, they depend on how quickly the shearing occurs. Viscosity is the material property which relates the viscous stresses in a material to the rate of change of a deformation (the strain rate). Although it applies to general flows, it is easy to visualize and define in a simple shearing flow, such as a planar Couette flow. In the Couette flow, a fluid is trapped between two infinitely large plates, one fixed and one in parallel motion at constant speed u (see illustration to the right). If the speed of the top plate is low enough (to avoid turbulence), then in steady state the fluid particles move parallel to it, and their speed varies from 0 at the bottom to u at the top. Each layer of fluid moves faster than the one just below it, and friction between them gives rise to a force resisting their relative motion. In particular, the fluid applies on the top plate a force in the direction opposite to its motion, and an equal but opposite force on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. | Wiki-STEM-kaggle |
Viscous forces
In many fluids, the flow velocity is observed to vary linearly from zero at the bottom to u at the top. Moreover, the magnitude of the force F acting on the top plate is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y: F = μ A u / y. The proportionality factor μ is the dynamic viscosity of the fluid, often simply referred to as the viscosity. It is denoted by the Greek letter mu (μ). The dynamic viscosity has the dimensions (mass/length)/time, therefore resulting in the SI unit kg/(m·s) = (N/m²)·s = Pa·s, i.e. pressure multiplied by time. The aforementioned ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the normal vector of the plates (see illustrations to the right). If the velocity does not vary linearly with y, then the appropriate generalization is τ = μ ∂u/∂y, where τ = F/A and ∂u/∂y is the local shear velocity. This expression is referred to as Newton's law of viscosity. | Wiki-STEM-kaggle |
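A quick numeric sketch of F = μ·A·u/y for the planar Couette setup above; the plate area, speed, and gap are made-up values, and the viscosity is roughly that of water:

```python
mu = 1.0e-3   # dynamic viscosity in Pa*s (roughly water at ~20 C)
A  = 0.5      # plate area in m^2 (hypothetical)
u  = 0.2      # top-plate speed in m/s (hypothetical)
y  = 1.0e-3   # plate separation in m (hypothetical)

shear_rate   = u / y             # 1/s
shear_stress = mu * shear_rate   # tau = mu * du/dy, in Pa
force        = shear_stress * A  # F = mu * A * u / y, in N

print(shear_rate, shear_stress, force)   # 200.0 0.2 0.1
```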
Viscous forces
In shearing flows with planar symmetry, it is what defines μ. It is a special case of the general definition of viscosity (see below), which can be expressed in coordinate-free form. Use of the Greek letter mu (μ) for the dynamic viscosity (sometimes also called the absolute viscosity) is common among mechanical and chemical engineers, as well as mathematicians and physicists. However, the Greek letter eta (η) is also used by chemists, physicists, and the IUPAC. The viscosity μ is sometimes also called the shear viscosity. However, at least one author discourages the use of this terminology, noting that μ can appear in non-shearing flows in addition to shearing flows. | Wiki-STEM-kaggle |
Functionally graded element
In materials science and mathematics, functionally graded elements are elements used in finite element analysis. They can be used to describe a functionally graded material. | Wiki-STEM-kaggle |
Thermostability
In materials science and molecular biology, thermostability is the ability of a substance to resist irreversible change in its chemical or physical structure, often by resisting decomposition or polymerization, at a high relative temperature. Thermostable materials may be used industrially as fire retardants. A thermostable plastic, an uncommon and unconventional term, is more likely to refer to a thermosetting plastic that cannot be reshaped when heated than to a thermoplastic that can be remelted and recast. Thermostability is also a property of some proteins. To be a thermostable protein means to be resistant to changes in protein structure due to applied heat. | Wiki-STEM-kaggle |
Poisson's Ratio
In materials science and solid mechanics, Poisson's ratio ν (nu) is a measure of the Poisson effect, the deformation (expansion or contraction) of a material in directions perpendicular to the specific direction of loading. The value of Poisson's ratio is the negative of the ratio of transverse strain to axial strain. For small values of these changes, ν is the amount of transversal elongation divided by the amount of axial compression. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. For soft materials, such as rubber, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is near 0.5. For open-cell polymer foams, Poisson's ratio is near zero, since the cells tend to collapse in compression. Many typical solids have Poisson's ratios in the range of 0.2–0.3. The ratio is named after the French mathematician and physicist Siméon Poisson. | Wiki-STEM-kaggle |
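A small worked example of ν = −(transverse strain)/(axial strain); the strains are made up to give a typical metal-like value:

```python
axial_strain      = 0.010    # 1% elongation along the loading axis (hypothetical)
transverse_strain = -0.003   # 0.3% contraction perpendicular to it (hypothetical)

nu = -transverse_strain / axial_strain
print(nu)   # 0.3, within the 0.2-0.3 range typical of many solids
```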
Residual stress
In materials science and solid mechanics, residual stresses are stresses that remain in a solid material after the original cause of the stresses has been removed. Residual stress may be desirable or undesirable. For example, laser peening imparts deep beneficial compressive residual stresses into metal components such as turbine engine fan blades, and it is used in toughened glass to allow for large, thin, crack- and scratch-resistant glass displays on smartphones. However, unintended residual stress in a designed structure may cause it to fail prematurely. Residual stresses can result from a variety of mechanisms including inelastic (plastic) deformations, temperature gradients (during a thermal cycle) or structural changes (phase transformation). Heat from welding may cause localized expansion, which is taken up during welding by either the molten metal or the placement of parts being welded. When the finished weldment cools, some areas cool and contract more than others, leaving residual stresses. Another example occurs during semiconductor fabrication and microsystem fabrication when thin film materials with different thermal and crystalline properties are deposited sequentially under different process conditions. The stress variation through a stack of thin film materials can be very complex and can vary between compressive and tensile stresses from layer to layer. | Wiki-STEM-kaggle |
Ostwald's step rule
In materials science, Ostwald's rule or Ostwald's step rule, conceived by Wilhelm Ostwald, describes the formation of polymorphs. The rule states that usually the less stable polymorph crystallizes first. Ostwald's rule is not a universal law but a common tendency observed in nature. This can be explained on the basis of irreversible thermodynamics, structural relationships, or a combined consideration of statistical thermodynamics and structural variation with temperature. Unstable polymorphs more closely resemble the state in solution, and thus are kinetically advantaged. For example, out of hot water, metastable, fibrous crystals of benzamide appear first, only later to spontaneously convert to the more stable rhombic polymorph. Another example is magnesium carbonate, which more readily forms dolomite. A dramatic example is phosphorus, which upon sublimation first forms the less stable white phosphorus, which only slowly polymerizes to the red allotrope. This is notably the case for the anatase polymorph of titanium dioxide, which, having a lower surface energy, is commonly the first phase to form by crystallisation from amorphous precursors or solutions despite being metastable, with rutile being the equilibrium phase at all temperatures and pressures. | Wiki-STEM-kaggle |
Schmid's law
In materials science, Schmid's law (also Schmid factor) describes the slip plane and slip direction of a stressed material that resolve the greatest shear stress. Schmid's law states that the critically resolved shear stress (τ) is equal to the stress applied to the material (σ) multiplied by the cosine of the angle with the vector normal to the glide plane (φ) and the cosine of the angle with the glide direction (λ). This can be expressed as τ = mσ, where m is known as the Schmid factor, m = cos(φ)cos(λ). Both τ and σ are measured in stress units, which are calculated the same way as pressure (force divided by area); φ and λ are angles. The factor is named after Erich Schmid, who coauthored a book with Walter Boas introducing the concept in 1935. | Wiki-STEM-kaggle |
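A small numeric sketch of τ = σ·cos(φ)·cos(λ); the applied stress and angles are illustrative, and φ = λ = 45° gives the maximum possible Schmid factor of 0.5:

```python
import math

sigma = 100.0                # applied tensile stress in MPa (hypothetical)
phi   = math.radians(45.0)   # angle between loading axis and slip-plane normal
lam   = math.radians(45.0)   # angle between loading axis and slip direction

m   = math.cos(phi) * math.cos(lam)   # Schmid factor
tau = m * sigma                       # resolved shear stress on the slip system

print(m, tau)   # 0.5 (to rounding), 50.0 MPa
```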
Lomer–Cottrell junction
In materials science, a Lomer–Cottrell junction is a particular configuration of dislocations. When two perfect dislocations encounter each other along a slip plane, each perfect dislocation can split into two Shockley partial dislocations: a leading dislocation and a trailing dislocation. When the two leading Shockley partials combine, they form a separate dislocation with a Burgers vector that is not in the slip plane. This is the Lomer–Cottrell dislocation. It is sessile and immobile in the slip plane, acting as a barrier against other dislocations in the plane. The trailing dislocations pile up behind the Lomer–Cottrell dislocation, and an ever greater force is required to push additional dislocations into the pile-up. For example, in an FCC lattice along {111} slip planes, each perfect dislocation splits into a leading and a trailing partial, a/2 → a/6 + a/6, and the combination of the two leading dislocations gives a/6 + a/6 → a/3. The resulting dislocation lies along the crystal face, which is not a slip plane in FCC at room temperature. | Wiki-STEM-kaggle |
Rule of mixtures
In materials science, a general rule of mixtures is a weighted mean used to predict various properties of a composite material. It provides theoretical upper and lower bounds on properties such as the elastic modulus, ultimate tensile strength, thermal conductivity, and electrical conductivity. In general there are two models, one for axial loading (Voigt model) and one for transverse loading (Reuss model). In general, for some material property E (often the elastic modulus), the rule of mixtures states that the overall property in the direction parallel to the fibers may be as high as E_c = f·E_f + (1 − f)·E_m, where f = V_f/(V_f + V_m) is the volume fraction of the fibers, E_f is the material property of the fibers, and E_m is the material property of the matrix. It is a common mistake to believe that this is the upper-bound modulus for Young's modulus. The real upper-bound Young's modulus is larger than the E_c given by this formula. Even if both constituents are isotropic, the real upper bound is E_c plus a term on the order of the square of the difference of the Poisson's ratios of the two constituents. The inverse rule of mixtures states that in the direction perpendicular to the fibers, the elastic modulus of a composite can be as low as E_c = (f/E_f + (1 − f)/E_m)^(−1). If the property under study is the elastic modulus, this quantity is called the lower-bound modulus, and corresponds to transverse loading. | Wiki-STEM-kaggle |
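A numeric sketch of the two bounds above; the fiber/matrix moduli and volume fraction are illustrative (roughly carbon-fiber/epoxy-like numbers, not values from the text):

```python
E_f = 230.0   # fiber modulus in GPa (hypothetical)
E_m = 3.5     # matrix modulus in GPa (hypothetical)
f   = 0.60    # fiber volume fraction (hypothetical)

E_parallel      = f * E_f + (1 - f) * E_m            # Voigt (rule of mixtures)
E_perpendicular = 1.0 / (f / E_f + (1 - f) / E_m)    # Reuss (inverse rule of mixtures)

print(E_parallel, E_perpendicular)   # ~139.4 GPa vs ~8.6 GPa
```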
Matrix (composite)
In materials science, a matrix is a constituent of a composite material. | Wiki-STEM-kaggle |
Creep (deformation)
In materials science, creep (sometimes called cold flow) is the tendency of a solid material to undergo slow deformation while subject to persistent mechanical stresses. It can occur as a result of long-term exposure to high levels of stress that are still below the yield strength of the material. Creep is more severe in materials that are subjected to heat for long periods and generally increases as they near their melting point. The rate of deformation is a function of the material's properties, exposure time, exposure temperature and the applied structural load. Depending on the magnitude of the applied stress and its duration, the deformation may become so large that a component can no longer perform its function – for example, creep of a turbine blade could cause the blade to contact the casing, resulting in the failure of the blade. Creep is usually of concern to engineers and metallurgists when evaluating components that operate under high stresses or high temperatures. Creep is a deformation mechanism that may or may not constitute a failure mode. For example, moderate creep in concrete is sometimes welcomed because it relieves tensile stresses that might otherwise lead to cracking. Unlike brittle fracture, creep deformation does not occur suddenly upon the application of stress. Instead, strain accumulates as a result of long-term stress. Therefore, creep is a "time-dependent" deformation. | Wiki-STEM-kaggle |
Critical resolved shear stress
In materials science, critical resolved shear stress (CRSS) is the component of shear stress, resolved in the direction of slip, necessary to initiate slip in a grain. Resolved shear stress (RSS) is the shear component of an applied tensile or compressive stress resolved along a slip plane that is other than perpendicular or parallel to the stress axis. The RSS is related to the applied stress by a geometrical factor, m, typically the Schmid factor: τ_RSS = σ_app·m = σ_app·(cos φ cos λ), where σ_app is the magnitude of the applied tensile stress, φ is the angle between the normal of the slip plane and the direction of the applied force, and λ is the angle between the slip direction and the direction of the applied force. The Schmid factor is most applicable to FCC single-crystal metals, but for polycrystal metals the Taylor factor has been shown to be more accurate. The CRSS is the value of resolved shear stress at which yielding of the grain occurs, marking the onset of plastic deformation. CRSS, therefore, is a material property and is not dependent on the applied load or grain orientation. The CRSS is related to the observed yield strength of the material by the maximum value of the Schmid factor: σ_y = τ_CRSS / m_max. CRSS is a constant for crystal families. Hexagonal close-packed crystals, for example, have three main families - basal, prismatic, and pyramidal - with different values for the critical resolved shear stress. | Wiki-STEM-kaggle |
Friability
In materials science, friability (FRY-ə-BIL-ə-tee), the condition of being friable, describes the tendency of a solid substance to break into smaller pieces under duress or contact, especially by rubbing. The opposite of friable is indurate. Substances that are designated hazardous, such as asbestos or crystalline silica, are often said to be friable if small particles are easily dislodged and become airborne, and hence respirable (able to enter human lungs), thereby posing a health hazard. Tougher substances, such as concrete, may also be mechanically ground down and reduced to finely divided mineral dust. However, such substances are not generally considered friable because of the degree of difficulty involved in breaking the substance's chemical bonds through mechanical means. Some substances, such as polyurethane foams, show an increase in friability with exposure to ultraviolet radiation, as in sunlight. Friable is sometimes used metaphorically to describe "brittle" personalities who can be "rubbed" by seemingly minor stimuli to produce extreme emotional responses. | Wiki-STEM-kaggle |
Lamellar structure
In materials science, lamellar structures or microstructures are composed of fine, alternating layers of different materials in the form of lamellae. They are often observed in cases where a phase transition front moves quickly, leaving behind two solid products, as in rapid cooling of eutectic (such as solder) or eutectoid (such as pearlite) systems. Such conditions force phases of different composition to form but allow little time for diffusion to produce those phases' equilibrium compositions. Fine lamellae solve this problem by shortening the diffusion distance between phases, but their high surface energy makes them unstable and prone to break up when annealing allows diffusion to progress. A deeper eutectic or more rapid cooling will result in finer lamellae; as the size of an individual lamella approaches zero, the system will instead retain its high-temperature structure. Two common cases of this include cooling a liquid to form an amorphous solid, and cooling eutectoid austenite to form martensite. In biology, normal adult bones possess a lamellar structure which may be disrupted by some diseases. | Wiki-STEM-kaggle |
Reinforcement (composite)
In materials science, reinforcement is a constituent of a composite material which increases the composite's stiffness and tensile strength. | Wiki-STEM-kaggle |
Modulus of rigidity
In materials science, shear modulus or modulus of rigidity, denoted by G, or sometimes S or μ, is a measure of the elastic shear stiffness of a material and is defined as the ratio of shear stress to shear strain: G := τ_xy / γ_xy = (F/A) / (Δx/l) = F·l / (A·Δx), where τ_xy = F/A is the shear stress, F is the force which acts, A is the area on which the force acts, and γ_xy is the shear strain (in engineering, γ_xy := Δx/l = tan θ; elsewhere, := θ), with Δx the transverse displacement and l the initial length of the area. The derived SI unit of shear modulus is the pascal (Pa), although it is usually expressed in gigapascals (GPa) or in thousand pounds per square inch (ksi). Its dimensional form is M1L−1T−2, replacing force by mass times acceleration. | Wiki-STEM-kaggle |
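A quick numeric sketch of G = F·l/(A·Δx) for a block sheared by a tangential force; all input values are made up for illustration:

```python
F       = 2.0e4    # tangential force in N (hypothetical)
A       = 0.01     # sheared face area in m^2 (hypothetical)
l       = 0.05     # block height in m (hypothetical)
delta_x = 2.0e-6   # transverse displacement of the top face in m (hypothetical)

shear_stress = F / A              # tau_xy, in Pa
shear_strain = delta_x / l        # gamma_xy (engineering definition)
G = shear_stress / shear_strain   # shear modulus, in Pa

print(G / 1e9)   # 50.0 GPa
```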
Superplastic deformation
In materials science, superplasticity is a state in which solid crystalline material is deformed well beyond its usual breaking point, usually over about 400% during tensile deformation. Such a state is usually achieved at high homologous temperature. Examples of superplastic materials are some fine-grained metals and ceramics. Other, non-crystalline (amorphous) materials such as silica glass ("molten glass") and polymers also deform similarly, but are not called superplastic, because they are not crystalline; rather, their deformation is often described as Newtonian fluid flow. Superplastically deformed material gets thinner in a very uniform manner, rather than forming a "neck" (a local narrowing) that leads to fracture. Also, the formation of microvoids, which is another cause of early fracture, is inhibited. Superplasticity must not be confused with superelasticity. | Wiki-STEM-kaggle |
Burgers vector
In materials science, the Burgers vector, named after Dutch physicist Jan Burgers, is a vector, often denoted as b, that represents the magnitude and direction of the lattice distortion resulting from a dislocation in a crystal lattice. The vector's magnitude and direction are best understood when the dislocation-bearing crystal structure is first visualized without the dislocation, that is, as the perfect crystal structure. In this perfect crystal structure, a rectangle whose lengths and widths are integer multiples of a (the unit cell edge length) is drawn encompassing the site of the original dislocation's origin. Once this encompassing rectangle is drawn, the dislocation can be introduced. This dislocation will have the effect of deforming not only the perfect crystal structure, but the rectangle as well. The said rectangle could have one of its sides disjoined from the perpendicular side, severing the connection of the length and width line segments of the rectangle at one of the rectangle's corners, and displacing each line segment from the other. What was once a rectangle before the dislocation was introduced is now an open geometric figure, whose opening defines the direction and magnitude of the Burgers vector. Specifically, the breadth of the opening defines the magnitude of the Burgers vector, and, when a set of fixed coordinates is introduced, an angle between the termini of the dislocated rectangle's length line segment and width line segment may be specified. When calculating the Burgers vector practically, one may draw a rectangular counterclockwise circuit (Burgers circuit) from a starting point to enclose the dislocation (see the picture above). The Burgers vector will be the vector needed to complete the circuit, i.e., from the end to the start of the circuit. The direction of the vector depends on the plane of dislocation, which is usually on one of the closest-packed crystallographic planes. | Wiki-STEM-kaggle |
Burgers vector
In most metallic materials, the magnitude of the Burgers vector for a dislocation is equal to the interatomic spacing of the material, since a single dislocation will offset the crystal lattice by one close-packed crystallographic spacing unit. In edge dislocations, the Burgers vector and dislocation line are perpendicular to one another. In screw dislocations, they are parallel. The Burgers vector is significant in determining the yield strength of a material by affecting solute hardening, precipitation hardening and work hardening. The Burgers vector also plays an important role in determining the direction of the dislocation line. | Wiki-STEM-kaggle |
Charpy test
In materials science, the Charpy impact test, also known as the Charpy V-notch test, is a standardized high-strain-rate test which determines the amount of energy absorbed by a material during fracture. Absorbed energy is a measure of the material's notch toughness. It is widely used in industry, since it is easy to prepare and conduct and results can be obtained quickly and cheaply. A disadvantage is that some results are only comparative. The test was pivotal in understanding the fracture problems of ships during World War II. The test was developed around 1900 by S. B. Russell (1898, American) and Georges Charpy (1901, French). The test became known as the Charpy test in the early 1900s due to the technical contributions and standardization efforts of Charpy. | Wiki-STEM-kaggle |
Zener–Hollomon parameter
In materials science, the Zener–Hollomon parameter, typically denoted as Z, is used to relate changes in temperature or strain rate to the stress–strain behavior of a material. It has been most extensively applied to the forming of steels at increased temperature, when creep is active. It is given by Z = ε̇·exp(Q/RT), where ε̇ is the strain rate, Q is the activation energy, R is the gas constant, and T is the temperature. The Zener–Hollomon parameter is also known as the temperature-compensated strain rate, since the two are inversely proportional in the definition. It is named after Clarence Zener and John Herbert Hollomon, Jr., who established the formula based on the stress–strain behavior of steel. When plastically deforming a material, the flow stress depends heavily on both the strain rate and temperature. During forming processes, Z may help determine appropriate changes in strain rate or temperature when the other variable is altered, in order to keep material flowing properly. Z has also been applied to some metals over a large range of strain rates and temperatures and shown comparable microstructures at the end of processing, as long as Z remained similar. This is because the relative activity of various deformation mechanisms typically scales inversely with temperature and directly with strain rate, so that decreasing the strain rate or increasing the temperature decreases Z and promotes plastic deformation. | Wiki-STEM-kaggle |
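A small numeric sketch of Z = ε̇·exp(Q/RT); the strain rate, activation energy, and temperatures below are illustrative hot-working-type values, not figures from the text:

```python
import math

R       = 8.314    # gas constant, J/(mol*K)
eps_dot = 1.0      # strain rate in 1/s (hypothetical)
Q       = 300e3    # activation energy in J/mol (hypothetical)
T       = 1100.0   # absolute temperature in K (hypothetical)

Z = eps_dot * math.exp(Q / (R * T))
print(f"{Z:.3e}")   # ~1.8e14 1/s

# The same strain rate at a higher temperature gives a smaller Z:
print(f"{eps_dot * math.exp(Q / (R * 1300.0)):.3e}")
```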
Threshold displacement energy
In materials science, the threshold displacement energy (Td) is the minimum kinetic energy that an atom in a solid needs to be permanently displaced from its site in the lattice to a defect position. It is also known as the "displacement threshold energy" or just the "displacement energy". In a crystal, a separate threshold displacement energy exists for each crystallographic direction. One should then distinguish between the minimum (Td,min) and the average (Td,ave) over all lattice directions of the threshold displacement energy. In amorphous solids, it may be possible to define an effective displacement energy to describe some other average quantity of interest. Threshold displacement energies in typical solids are of the order of 10–50 eV. | Wiki-STEM-kaggle |
Yield strength anomaly
In materials science, the yield strength anomaly refers to materials wherein the yield strength (i.e., the stress necessary to initiate plastic yielding) increases with temperature. For the majority of materials, the yield strength decreases with increasing temperature. In metals, this decrease in yield strength is due to the thermal activation of dislocation motion, resulting in easier plastic deformation at higher temperatures. In some cases, a yield strength anomaly refers to a decrease in the ductility of a material with increasing temperature, which is also opposite to the trend in the majority of materials. Anomalies in ductility can be more clear, as an anomalous effect on yield strength can be obscured by its typical decrease with temperature. In concert with yield strength or ductility anomalies, some materials demonstrate extrema in other temperature-dependent properties, such as a minimum in ultrasonic damping or a maximum in electrical conductivity. The yield strength anomaly in β-brass was one of the earliest discoveries of such a phenomenon, and several other ordered intermetallic alloys demonstrate this effect. Precipitation-hardened superalloys exhibit a yield strength anomaly over a considerable temperature range. For these materials, the yield strength shows little variation between room temperature and several hundred degrees Celsius. Eventually, a maximum yield strength is reached. For even higher temperatures, the yield strength decreases and, eventually, drops to zero when reaching the melting temperature, where the solid material transforms into a liquid. For ordered intermetallics, the temperature of the yield strength peak is roughly 50% of the absolute melting temperature. | Wiki-STEM-kaggle |
Antiferromagnetic interaction
In materials that exhibit antiferromagnetism, the magnetic moments of atoms or molecules, usually related to the spins of electrons, align in a regular pattern with neighboring spins (on different sublattices) pointing in opposite directions. This is, like ferromagnetism and ferrimagnetism, a manifestation of ordered magnetism. The phenomenon of antiferromagnetism was first introduced by Lev Landau in 1933. Generally, antiferromagnetic order may exist at sufficiently low temperatures, but vanishes at and above the Néel temperature – named after Louis Néel, who first identified this type of magnetic ordering. Above the Néel temperature, the material is typically paramagnetic. | Wiki-STEM-kaggle |
Pseudo-differential operator
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space. | Wiki-STEM-kaggle |
Oscillatory integral
In mathematical analysis an oscillatory integral is a type of distribution. Oscillatory integrals make rigorous many arguments that, on a naive level, appear to use divergent integrals. It is possible to represent approximate solution operators for many differential equations as oscillatory integrals. | Wiki-STEM-kaggle |
Z-order curve
In mathematical analysis and computer science, a Z-order curve (also Lebesgue curve, Morton space-filling curve, Morton order, or Morton code) is a function which maps multidimensional data to one dimension while preserving locality of the data points. It is named in France after Henri Lebesgue, who studied it in 1904, and in the United States after Guy Macdonald Morton, who first applied the order to file sequencing in 1966. The z-value of a point in multiple dimensions is simply calculated by interleaving the binary representations of its coordinate values. Once the data are sorted into this ordering, any one-dimensional data structure can be used, such as simple one-dimensional arrays, binary search trees, B-trees, skip lists or (with low significant bits truncated) hash tables. The resulting ordering can equivalently be described as the order one would get from a depth-first traversal of a quadtree or octree. | Wiki-STEM-kaggle |
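A minimal sketch of computing a 2D z-value by interleaving coordinate bits, as described above (the 16-bit width per coordinate is an arbitrary choice for this sketch):

```python
def morton_2d(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y: bit i of x goes to bit 2i of the
    result, bit i of y goes to bit 2i+1."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# Sorting points by their z-value tends to keep nearby points together:
points = [(5, 1), (4, 0), (0, 7), (1, 6)]
print(sorted(points, key=lambda p: morton_2d(*p)))
```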
Sigma field
In mathematical analysis and in probability theory, a σ-algebra (also σ-field) on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair (X, Σ) is called a measurable space. The σ-algebras are a subset of the set algebras; elements of the latter only need to be closed under the union or intersection of finitely many subsets, which is a weaker condition. The main use of σ-algebras is in the definition of measures; specifically, the collection of those subsets for which a given measure is defined is necessarily a σ-algebra. This concept is important in mathematical analysis as the foundation for Lebesgue integration, and in probability theory, where it is interpreted as the collection of events which can be assigned probabilities. Also, in probability, σ-algebras are pivotal in the definition of conditional expectation. In statistics, (sub) σ-algebras are needed for the formal mathematical definition of a sufficient statistic, particularly when the statistic is a function or a random process and the notion of conditional density is not applicable. If X = {a, b, c, d}, one possible σ-algebra on X is Σ = {∅, {a, b}, {c, d}, {a, b, c, d}}, where ∅ is the empty set. | Wiki-STEM-kaggle |
Sigma field
In general, a finite algebra is always a σ-algebra. If {A1, A2, A3, …} is a countable partition of X, then the collection of all unions of sets in the partition (including the empty set) is a σ-algebra. A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as the Borel hierarchy). | Wiki-STEM-kaggle |
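For a finite set the closure conditions can be checked by brute force. The sketch below verifies them for the four-element example Σ = {∅, {a, b}, {c, d}, X} given earlier (for finite collections, closure under pairwise unions, intersections, and complements is enough):

```python
from itertools import combinations

X     = frozenset("abcd")
sigma = {frozenset(), frozenset("ab"), frozenset("cd"), X}

closed_under_complement = all(X - A in sigma for A in sigma)
closed_under_union      = all(A | B in sigma for A, B in combinations(sigma, 2))
closed_under_intersect  = all(A & B in sigma for A, B in combinations(sigma, 2))

print(closed_under_complement, closed_under_union, closed_under_intersect)
# True True True -> sigma is a sigma-algebra on X
```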
Multi-variable function
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article. The domain of a function of n variables is the subset of R^n for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of R^n. | Wiki-STEM-kaggle |
Bounded poset
In mathematical analysis and related areas of mathematics, a set is called bounded if it is, in a certain sense, of finite measure. Conversely, a set which is not bounded is called unbounded. The word "bounded" makes no sense in a general topological space without a corresponding metric. Boundary is a distinct concept: for example, a circle in isolation is a boundaryless bounded set, while the half plane is unbounded yet has a boundary. A bounded set is not necessarily a closed set and vice versa. For example, a subset S of a 2-dimensional real space R² constrained by two parabolic curves x² + 1 and x² − 1 defined in a Cartesian coordinate system is closed by the curves but not bounded (so unbounded). | Wiki-STEM-kaggle |
Agmon's inequality
In mathematical analysis, Agmon's inequalities, named after Shmuel Agmon, consist of two closely related interpolation inequalities between the Lebesgue space L^∞ and the Sobolev spaces H^s. They are useful in the study of partial differential equations. Let u ∈ H^2(Ω) ∩ H^1_0(Ω), where Ω ⊂ R^3. Then Agmon's inequalities in 3D state that there exists a constant C such that ‖u‖_{L^∞(Ω)} ≤ C ‖u‖_{H^1(Ω)}^{1/2} ‖u‖_{H^2(Ω)}^{1/2}, and ‖u‖_{L^∞(Ω)} ≤ C ‖u‖_{L^2(Ω)}^{1/4} ‖u‖_{H^2(Ω)}^{3/4}. | Wiki-STEM-kaggle |
Agmon's inequality
In 2D, the first inequality still holds, but not the second: let u ∈ H^2(Ω) ∩ H^1_0(Ω), where Ω ⊂ R^2. Then Agmon's inequality in 2D states that there exists a constant C such that ‖u‖_{L^∞(Ω)} ≤ C ‖u‖_{L^2(Ω)}^{1/2} ‖u‖_{H^2(Ω)}^{1/2}. For the n-dimensional case, choose s_1 and s_2 such that s_1 < n/2 < s_2. | Wiki-STEM-kaggle |
Cesàro summation
In mathematical analysis, Cesàro summation (also known as the Cesàro mean) assigns values to some infinite sums that are not necessarily convergent in the usual sense. The Cesàro sum is defined as the limit, as n tends to infinity, of the sequence of arithmetic means of the first n partial sums of the series. This special case of a matrix summability method is named for the Italian analyst Ernesto Cesàro (1859–1906). The term summation can be misleading, as some statements and proofs regarding Cesàro summation can be said to implicate the Eilenberg–Mazur swindle. For example, it is commonly applied to Grandi's series with the conclusion that the sum of that series is 1/2. | Wiki-STEM-kaggle |
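A short numeric sketch of the definition: take Grandi's series 1 − 1 + 1 − 1 + …, form its partial sums, and average them; the running means approach 1/2 even though the partial sums themselves do not converge:

```python
from itertools import accumulate

N = 1000
terms = [(-1)**k for k in range(N)]        # 1, -1, 1, -1, ...
partial_sums = list(accumulate(terms))     # 1, 0, 1, 0, ...

# Cesàro means: arithmetic means of the first n partial sums
cesaro_means = [sum(partial_sums[:n]) / n for n in range(1, N + 1)]
print(cesaro_means[-1])   # ~0.5, the Cesàro sum of Grandi's series
```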
Clairaut's equation
In mathematical analysis, Clairaut's equation (or the Clairaut equation) is a differential equation of the form y(x) = x·dy/dx + f(dy/dx), where f is continuously differentiable. It is a particular case of the Lagrange differential equation. It is named after the French mathematician Alexis Clairaut, who introduced it in 1734. | Wiki-STEM-kaggle |
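A classical fact about Clairaut's equation (standard, though not stated in the excerpt) is that the one-parameter family y = Cx + f(C) solves it for any constant C. The sketch below verifies this symbolically for the sample choice f(p) = p²:

```python
import sympy as sp

x, C = sp.symbols('x C')
f = lambda p: p**2              # sample choice of f (any differentiable f works)

y = C * x + f(C)                # candidate solution y = Cx + f(C)
y_prime = sp.diff(y, x)         # dy/dx = C

residual = sp.simplify(y - (x * y_prime + f(y_prime)))
print(residual)                 # 0, so the family satisfies y = x*y' + f(y')
```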
Darboux's formula
In mathematical analysis, Darboux's formula is a formula introduced by Gaston Darboux (1876) for summing infinite series by using integrals or evaluating integrals using infinite series. It is a generalization to the complex plane of the Euler–Maclaurin summation formula, which is used for similar purposes and derived in a similar manner (by repeated integration by parts of a particular choice of integrand). Darboux's formula can also be used to derive the Taylor series from calculus. | Wiki-STEM-kaggle |
Dini continuity
In mathematical analysis, Dini continuity is a refinement of continuity. Every Dini continuous function is continuous. Every Lipschitz continuous function is Dini continuous. | Wiki-STEM-kaggle |
Ehrenpreis's fundamental principle
In mathematical analysis, Ehrenpreis's fundamental principle, introduced by Leon Ehrenpreis, states: Every solution of a system (in general, overdetermined) of homogeneous partial differential equations with constant coefficients can be represented as the integral with respect to an appropriate Radon measure over the complex "characteristic variety" of the system. | Wiki-STEM-kaggle |
Ekeland's variational principle
In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem that asserts that there exist nearly optimal solutions to some optimization problems. Ekeland's principle can be used when the lower level set of a minimization problem is not compact, so that the Bolzano–Weierstrass theorem cannot be applied. The principle relies on the completeness of the metric space. The principle has been shown to be equivalent to completeness of metric spaces. In proof theory, it is equivalent to Π11-CA0 over RCA0, i.e. relatively strong. It also leads to a quick proof of the Caristi fixed point theorem. | Wiki-STEM-kaggle |
Fourier integral operator
In mathematical analysis, Fourier integral operators have become an important tool in the theory of partial differential equations. The class of Fourier integral operators contains differential operators as well as classical integral operators as special cases. A Fourier integral operator T is given by (Tf)(x) = ∫_{R^n} e^{2πi Φ(x,ξ)} a(x,ξ) f̂(ξ) dξ, where f̂ denotes the Fourier transform of f, a(x,ξ) is a standard symbol which is compactly supported in x, and Φ is real valued and homogeneous of degree 1 in ξ. It is also necessary to require that det(∂²Φ/∂x_i ∂ξ_j) ≠ 0 on the support of a. Under these conditions, if a is of order zero, it is possible to show that T defines a bounded operator from L² to L². | Wiki-STEM-kaggle |
Fubini's Theorem
In mathematical analysis, Fubini's theorem is a result that gives conditions under which it is possible to compute a double integral by using an iterated integral, introduced by Guido Fubini in 1907. One may switch the order of integration if the double integral yields a finite answer when the integrand is replaced by its absolute value. Fubini's theorem implies that the two iterated integrals are equal to the corresponding double integral. | Wiki-STEM-kaggle |
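As a quick numerical illustration (a minimal sketch added for clarity, not part of the row above; the integrand, the rectangle, and the use of SciPy's quad routine are arbitrary choices), the two iterated integrals of an absolutely integrable function agree up to quadrature error:

```python
import numpy as np
from scipy import integrate

# An integrand that is absolutely integrable on [0, 1] x [0, 2],
# so Fubini's theorem allows swapping the order of integration.
def f(x, y):
    return np.exp(-x * y) * np.sin(x + y)

# Integrate in x first, then in y.
x_then_y, _ = integrate.quad(
    lambda y: integrate.quad(lambda x: f(x, y), 0.0, 1.0)[0], 0.0, 2.0)

# Integrate in y first, then in x.
y_then_x, _ = integrate.quad(
    lambda x: integrate.quad(lambda y: f(x, y), 0.0, 2.0)[0], 0.0, 1.0)

assert np.isclose(x_then_y, y_then_x)  # both orders give the same value
```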
Fubini's Theorem
Tonelli's theorem, introduced by Leonida Tonelli in 1909, is similar, but applies to a non-negative measurable function rather than one integrable over its domain. A related theorem is often called Fubini's theorem for infinite series, which states that if $\{a_{m,n}\}_{m,n=1}^{\infty}$ is a doubly-indexed sequence of real numbers, and if $\sum_{(m,n)\in\mathbb{N}\times\mathbb{N}} a_{m,n}$ is absolutely convergent, then $\sum_{(m,n)\in\mathbb{N}\times\mathbb{N}} a_{m,n} = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} a_{m,n} = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} a_{m,n}$. Although Fubini's theorem for infinite series is a special case of the more general Fubini's theorem, it is not appropriate to characterize it as a logical consequence of Fubini's theorem. This is because some properties of measures, in particular sub-additivity, are often proved using Fubini's theorem for infinite series. In this case, Fubini's general theorem is a logical consequence of Fubini's theorem for infinite series. | Wiki-STEM-kaggle |
Glaeser's continuity theorem
In mathematical analysis, Glaeser's continuity theorem is a characterization of the continuity of the derivative of the square roots of functions of class $C^2$. It was introduced in 1963 by Georges Glaeser, and was later simplified by Jean Dieudonné. The theorem states: Let $f : U \rightarrow \mathbb{R}_0^{+}$ be a function of class $C^2$ in an open set $U$ contained in $\mathbb{R}^n$; then $\sqrt{f}$ is of class $C^1$ in $U$ if and only if its partial derivatives of first and second order vanish in the zeros of $f$. | Wiki-STEM-kaggle |
Haar's Tauberian theorem
In mathematical analysis, Haar's Tauberian theorem, named after Alfréd Haar, relates the asymptotic behaviour of a continuous function to properties of its Laplace transform. It is related to the integral formulation of the Hardy–Littlewood Tauberian theorem. | Wiki-STEM-kaggle |
Heine's Reciprocal Square Root Identity
In mathematical analysis, Heine's identity, named after Heinrich Eduard Heine, is a Fourier expansion of a reciprocal square root, expressed in terms of $Q_{m-\frac{1}{2}}$, a Legendre function of the second kind of half-integer degree $m - \tfrac{1}{2}$ and real argument $z$ greater than one. The expression can be generalized to arbitrary half-integer powers in terms of the Gamma function $\Gamma$. | Wiki-STEM-kaggle |
Hölder's inequality
In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces. It states that if $p, q \in [1, \infty]$ satisfy $1/p + 1/q = 1$, then for all measurable functions $f$ and $g$ on a measure space, $\|fg\|_1 \leq \|f\|_p\,\|g\|_q$. The numbers p and q are said to be Hölder conjugates of each other. The special case p = q = 2 gives a form of the Cauchy–Schwarz inequality. Hölder's inequality holds even if ‖fg‖1 is infinite, the right-hand side also being infinite in that case. Conversely, if f is in Lp(μ) and g is in Lq(μ), then the pointwise product fg is in L1(μ). Hölder's inequality is used to prove the Minkowski inequality, which is the triangle inequality in the space Lp(μ), and also to establish that Lq(μ) is the dual space of Lp(μ) for p ∈ [1, ∞). Hölder's inequality (in a slightly different form) was first found by Leonard James Rogers (1888). Inspired by Rogers' work, Hölder (1889) gave another proof as part of a work developing the concept of convex and concave functions and introducing Jensen's inequality, which was in turn named for work of Johan Jensen building on Hölder's work. | Wiki-STEM-kaggle |
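A small numerical sanity check (an illustrative sketch only, not part of the row above; the random vectors, the exponent, and the use of NumPy are arbitrary choices, and finite sums play the role of integrals with respect to counting measure):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=1000)
g = rng.normal(size=1000)

p = 3.0
q = p / (p - 1.0)                      # Hölder conjugate: 1/p + 1/q = 1

lhs = np.sum(np.abs(f * g))            # ||f g||_1 with respect to counting measure
rhs = np.sum(np.abs(f) ** p) ** (1 / p) * np.sum(np.abs(g) ** q) ** (1 / q)
assert lhs <= rhs + 1e-12              # Hölder's inequality holds
```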
Korn's inequality
In mathematical analysis, Korn's inequality is an inequality concerning the gradient of a vector field that generalizes the following classical theorem: if the gradient of a vector field is skew-symmetric at every point, then the gradient must be equal to a constant skew-symmetric matrix. Korn's theorem is a quantitative version of this statement, which intuitively says that if the gradient of a vector field is on average not far from the space of skew-symmetric matrices, then the gradient must not be far from a particular skew-symmetric matrix. The statement that Korn's inequality generalizes thus arises as a special case of rigidity. In (linear) elasticity theory, the symmetric part of the gradient is a measure of the strain that an elastic body experiences when it is deformed by a given vector-valued function. The inequality is therefore an important tool as an a priori estimate in linear elasticity theory. | Wiki-STEM-kaggle |
Krein's condition
In mathematical analysis, Krein's condition provides a necessary and sufficient condition for exponential sums $\left\{\sum_{k=1}^{n} a_k \exp(i\lambda_k x) : a_k \in \mathbb{C},\ \lambda_k \geq 0\right\}$ to be dense in a weighted $L^2$ space on the real line. It was discovered by Mark Krein in the 1940s. A corollary, also called Krein's condition, provides a sufficient condition for the indeterminacy of the moment problem. | Wiki-STEM-kaggle |
Lambert summation
In mathematical analysis, Lambert summation is a summability method for a class of divergent series. | Wiki-STEM-kaggle |
Lipschitz function
In mathematical analysis, Lipschitz continuity, named after German mathematician Rudolf Lipschitz, is a strong form of uniform continuity for functions. Intuitively, a Lipschitz continuous function is limited in how fast it can change: there exists a real number such that, for every pair of points on the graph of this function, the absolute value of the slope of the line connecting them is not greater than this real number; the smallest such bound is called the Lipschitz constant of the function (and is related to the modulus of uniform continuity). For instance, every function that is defined on an interval and has bounded first derivative is Lipschitz continuous. In the theory of differential equations, Lipschitz continuity is the central condition of the Picard–Lindelöf theorem which guarantees the existence and uniqueness of the solution to an initial value problem. A special type of Lipschitz continuity, called contraction, is used in the Banach fixed-point theorem. We have the following chain of strict inclusions for functions over a closed and bounded non-trivial interval of the real line: Continuously differentiable ⊂ Lipschitz continuous ⊂ $\alpha$-Hölder continuous, where $0 < \alpha \leq 1$. We also have Lipschitz continuous ⊂ absolutely continuous ⊂ uniformly continuous. | Wiki-STEM-kaggle |
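As an illustrative sketch (not part of the row above; the choice of the sine function, the sample range, and the tolerance are arbitrary), the slopes of chords of a function with bounded derivative never exceed the derivative bound, here 1 for sin:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-10.0, 10.0, size=5000)
y = rng.uniform(-10.0, 10.0, size=5000)
mask = np.abs(x - y) > 1e-9            # avoid division by (near) zero

# |sin(x) - sin(y)| <= |x - y|, since |cos| <= 1 bounds the derivative.
slopes = np.abs(np.sin(x[mask]) - np.sin(y[mask])) / np.abs(x[mask] - y[mask])
print(slopes.max())                    # empirical Lipschitz estimate, close to 1
assert np.all(slopes <= 1.0 + 1e-12)
```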
Littlewood's 4/3 inequality
In mathematical analysis, Littlewood's 4/3 inequality, named after John Edensor Littlewood, is an inequality that holds for every complex-valued bilinear form defined on $c_0$, the Banach space of scalar sequences that converge to zero. Precisely, let $B : c_0 \times c_0 \to \mathbb{C}$ or $\mathbb{R}$ be a bilinear form. Then the following holds: $\left(\sum_{i,j=1}^{\infty} |B(e_i, e_j)|^{4/3}\right)^{3/4} \leq \sqrt{2}\,\|B\|$, where $\|B\| = \sup\{|B(x_1, x_2)| : \|x_i\|_\infty \leq 1\}$. The exponent 4/3 is optimal, i.e., cannot be improved by a smaller exponent. It is also known that for real scalars the aforementioned constant is sharp. | Wiki-STEM-kaggle |
Lorentz space
In mathematical analysis, Lorentz spaces, introduced by George G. Lorentz in the 1950s, are generalisations of the more familiar $L^p$ spaces. The Lorentz spaces are denoted by $L^{p,q}$. Like the $L^p$ spaces, they are characterized by a norm (technically a quasinorm) that encodes information about the "size" of a function, just as the $L^p$ norm does. The two basic qualitative notions of "size" of a function are: how tall is the graph of the function, and how spread out is it. The Lorentz norms provide tighter control over both qualities than the $L^p$ norms, by exponentially rescaling the measure in both the range ($p$) and the domain ($q$). The Lorentz norms, like the $L^p$ norms, are invariant under arbitrary rearrangements of the values of a function. | Wiki-STEM-kaggle |
Mosco convergence
In mathematical analysis, Mosco convergence is a notion of convergence for functionals that is used in nonlinear analysis and set-valued analysis. It is a particular case of Γ-convergence. Mosco convergence is sometimes phrased as “weak Γ-liminf and strong Γ-limsup” convergence since it uses both the weak and strong topologies on a topological vector space X. In finite dimensional spaces, Mosco convergence coincides with epi-convergence, while in infinite-dimensional ones, Mosco convergence is a strictly stronger property. Mosco convergence is named after the Italian mathematician Umberto Mosco, currently the Harold J. Gay Professor of Mathematics at Worcester Polytechnic Institute. | Wiki-STEM-kaggle |
Netto's theorem
In mathematical analysis, Netto's theorem states that continuous bijections of smooth manifolds preserve dimension. That is, there does not exist a continuous bijection between two smooth manifolds of different dimension. It is named after Eugen Netto. The case for maps from a higher-dimensional manifold to a one-dimensional manifold was proven by Jacob Lüroth in 1878, using the intermediate value theorem to show that no manifold containing a topological circle can be mapped continuously and bijectively to the real line. Both Netto in 1878, and Georg Cantor in 1879, gave faulty proofs of the general theorem. The faults were later recognized and corrected. An important special case of this theorem concerns the non-existence of continuous bijections from one-dimensional spaces, such as the real line or unit interval, to two-dimensional spaces, such as the Euclidean plane or unit square. The conditions of the theorem can be relaxed in different ways to obtain interesting classes of functions from one-dimensional spaces to two-dimensional spaces: Space-filling curves are surjective continuous functions from one-dimensional spaces to two-dimensional spaces. They cover every point of the plane, or of a unit square, by the image of a line or unit interval. Examples include the Peano curve and Hilbert curve. Neither of these examples has any self-crossings, but by Netto's theorem there are many points of the square that are covered multiple times by these curves. Osgood curves are continuous bijections from one-dimensional spaces to subsets of the plane that have nonzero area. They form Jordan curves in the plane. However, by Netto's theorem, they cannot cover the entire plane, unit square, or any other two-dimensional region. If one relaxes the requirement of continuity, then all smooth manifolds of bounded dimension have equal cardinality, the cardinality of the continuum. Therefore, there exist discontinuous bijections between any two of them, as Georg Cantor showed in 1878. Cantor's result came as a surprise to many mathematicians and kicked off the line of research leading to space-filling curves, Osgood curves, and Netto's theorem. | Wiki-STEM-kaggle |
Netto's theorem
A near-bijection from the unit square to the unit interval can be obtained by interleaving the digits of the decimal representations of the Cartesian coordinates of points in the square. The ambiguities of decimal notation, exemplified by the two decimal representations of 1 = 0.999..., cause this to be an injection rather than a bijection, but this issue can be repaired by using the Schröder–Bernstein theorem. | Wiki-STEM-kaggle |
Parseval's formula
In mathematical analysis, Parseval's identity, named after Marc-Antoine Parseval, is a fundamental result on the summability of the Fourier series of a function. Geometrically, it is a generalized Pythagorean theorem for inner-product spaces (which can have an uncountable infinity of basis vectors). Informally, the identity asserts that the sum of the squares of the Fourier coefficients $c_n$ of a function $f$ is equal to the integral of the square of the function. More formally, the result holds as stated provided $f$ is a square-integrable function or, more generally, lies in the Lp space $L^2$. A similar result is the Plancherel theorem, which asserts, in one dimension for $f \in L^2(\mathbb{R})$, that the integral of the square of the Fourier transform of a function is equal to the integral of the square of the function itself. Another similar identity is one which gives the integral of the fourth power of a function $f \in L^4$ in terms of its Fourier coefficients, given that $f$ has a finite-length discrete Fourier transform with $M$ coefficients $c \in \mathbb{C}$; if $c \in \mathbb{R}$, the identity simplifies further. | Wiki-STEM-kaggle |
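A discrete analogue can be checked numerically (an illustrative sketch only, not part of the row above; the signal length and NumPy's unnormalized FFT convention are assumptions of this snippet):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=256) + 1j * rng.normal(size=256)
X = np.fft.fft(x)                       # unnormalized DFT

# Discrete Parseval relation for this convention:
#   sum_n |x_n|^2 = (1/N) * sum_k |X_k|^2
lhs = np.sum(np.abs(x) ** 2)
rhs = np.sum(np.abs(X) ** 2) / x.size
assert np.isclose(lhs, rhs)
```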
Rademacher's theorem
In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: If U is an open subset of Rn and f: U → Rm is Lipschitz continuous, then f is differentiable almost everywhere in U; that is, the points in U at which f is not differentiable form a set of Lebesgue measure zero. Differentiability here refers to infinitesimal approximability by a linear map, which in particular asserts the existence of the coordinate-wise partial derivatives. | Wiki-STEM-kaggle |
Strichartz estimate
In mathematical analysis, Strichartz estimates are a family of inequalities for linear dispersive partial differential equations. These inequalities establish size and decay of solutions in mixed norm Lebesgue spaces. They were first noted by Robert Strichartz and arose out of connections to the Fourier restriction problem. | Wiki-STEM-kaggle |
Tannery's theorem
In mathematical analysis, Tannery's theorem gives sufficient conditions for the interchanging of the limit and infinite summation operations. It is named after Jules Tannery. | Wiki-STEM-kaggle |
Trudinger's theorem
In mathematical analysis, Trudinger's theorem or the Trudinger inequality (also sometimes called the Moser–Trudinger inequality) is a result of functional analysis on Sobolev spaces. It is named after Neil Trudinger (and Jürgen Moser). It provides an inequality between a certain Sobolev space norm and an Orlicz space norm of a function. The inequality is a limiting case of Sobolev imbedding and can be stated as the following theorem: Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ satisfying the cone condition. Let $mp = n$ and $p > 1$. Set $A(t) = \exp\left(t^{n/(n-m)}\right) - 1$. Then there exists the embedding $W^{m,p}(\Omega) \hookrightarrow L_A(\Omega)$, where $L_A(\Omega) = \left\{ u \in M_f(\Omega) : \|u\|_{A,\Omega} = \inf\left\{ k > 0 : \int_{\Omega} A\left(\frac{|u(x)|}{k}\right) dx \leq 1 \right\} < \infty \right\}$. The space $L_A(\Omega)$ is an example of an Orlicz space. | Wiki-STEM-kaggle |
Wiener's Tauberian theorem
In mathematical analysis, Wiener's tauberian theorem is any of several related results proved by Norbert Wiener in 1932. They provide a necessary and sufficient condition under which any function in $L^1$ or $L^2$ can be approximated by linear combinations of translations of a given function. Informally, if the Fourier transform of a function $f$ vanishes on a certain set $Z$, the Fourier transform of any linear combination of translations of $f$ also vanishes on $Z$. Therefore, the linear combinations of translations of $f$ cannot approximate a function whose Fourier transform does not vanish on $Z$. Wiener's theorems make this precise, stating that linear combinations of translations of $f$ are dense if and only if the zero set of the Fourier transform of $f$ is empty (in the case of $L^1$) or of Lebesgue measure zero (in the case of $L^2$). Gelfand reformulated Wiener's theorem in terms of commutative C*-algebras, where it states that the spectrum of the $L^1$ group ring $L^1(\mathbb{R})$ of the group $\mathbb{R}$ of real numbers is the dual group of $\mathbb{R}$. A similar result is true when $\mathbb{R}$ is replaced by any locally compact abelian group. | Wiki-STEM-kaggle |
Zorich's theorem
In mathematical analysis, Zorich's theorem was proved by Vladimir A. Zorich in 1967. The result was conjectured by M. A. Lavrentev in 1938. | Wiki-STEM-kaggle |
Banach limit
In mathematical analysis, a Banach limit is a continuous linear functional $\phi : \ell^\infty \to \mathbb{C}$ defined on the Banach space $\ell^\infty$ of all bounded complex-valued sequences such that for all sequences $x = (x_n)$, $y = (y_n)$ in $\ell^\infty$ and all complex numbers $\alpha$: $\phi(\alpha x + y) = \alpha\,\phi(x) + \phi(y)$ (linearity); if $x_n \geq 0$ for all $n \in \mathbb{N}$, then $\phi(x) \geq 0$ (positivity); $\phi(x) = \phi(Sx)$, where $S$ is the shift operator defined by $(Sx)_n = x_{n+1}$ (shift-invariance); if $x$ is a convergent sequence, then $\phi(x) = \lim x$. Hence, $\phi$ is an extension of the continuous functional $\lim : c \to \mathbb{C}$, where $c \subset \ell^\infty$ is the complex vector space of all sequences which converge to a (usual) limit in $\mathbb{C}$. In other words, a Banach limit extends the usual limits, is linear, shift-invariant and positive. However, there exist sequences for which the values of two Banach limits do not agree. We say that the Banach limit is not uniquely determined in this case. | Wiki-STEM-kaggle |
Banach limit
As a consequence of the above properties, a real-valued Banach limit also satisfies $\liminf_{n\to\infty} x_n \leq \phi(x) \leq \limsup_{n\to\infty} x_n$. The existence of Banach limits is usually proved using the Hahn–Banach theorem (the analyst's approach), or using ultrafilters (an approach more frequent in set-theoretical expositions). These proofs necessarily use the axiom of choice (they are so-called non-effective proofs). | Wiki-STEM-kaggle |
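A standard worked example (added for illustration, not part of the row above) shows how shift-invariance and linearity pin down the value of every Banach limit on the non-convergent sequence $x = (0, 1, 0, 1, \ldots)$:

```latex
% Let x = (0, 1, 0, 1, ...). Then Sx = (1, 0, 1, 0, ...) and x + Sx = (1, 1, 1, ...).
% For any Banach limit phi:
%   phi(x) + phi(Sx) = phi(x + Sx) = 1     (linearity, and x + Sx converges to 1)
%   phi(x) = phi(Sx)                        (shift-invariance)
% Hence 2 phi(x) = 1, so phi(x) = 1/2 for every Banach limit,
% even though the sequence x itself does not converge.
\[
  \phi\bigl((0,1,0,1,\dots)\bigr) = \tfrac{1}{2}.
\]
```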
Besicovitch covering theorem
In mathematical analysis, a Besicovitch cover, named after Abram Samoilovitch Besicovitch, is an open cover of a subset E of the Euclidean space RN by balls such that each point of E is the center of some ball in the cover. The Besicovitch covering theorem asserts that there exists a constant cN depending only on the dimension N with the following property: Given any Besicovitch cover F of a bounded set E, there are cN subcollections of balls A1 = {Bn1}, …, AcN = {BncN} contained in F such that each collection Ai consists of disjoint balls, and $E \subseteq \bigcup_{i=1}^{c_N} \bigcup_{B \in A_i} B$. Let G denote the subcollection of F consisting of all balls from the cN disjoint families A1,...,AcN. The following less precise statement is clearly true: every point x ∈ RN belongs to at most cN different balls from the subcollection G, and G remains a cover for E (every point y ∈ E belongs to at least one ball from the subcollection G). This property actually gives an equivalent form of the theorem (except for the value of the constant): There exists a constant bN depending only on the dimension N with the following property: Given any Besicovitch cover F of a bounded set E, there is a subcollection G of F such that G is a cover of the set E and every point x ∈ E belongs to at most bN different balls from the subcover G. In other words, the function $S_G$ equal to the sum of the indicator functions of the balls in G is larger than $\mathbf{1}_E$ and bounded on RN by the constant bN: $\mathbf{1}_E \leq S_G := \sum_{B \in G} \mathbf{1}_B \leq b_N$. | Wiki-STEM-kaggle |
Contraction semigroup
In mathematical analysis, a C0-semigroup Γ(t), t ≥ 0, is called a quasicontraction semigroup if there is a constant ω such that ||Γ(t)|| ≤ exp(ωt) for all t ≥ 0. Γ(t) is called a contraction semigroup if ||Γ(t)|| ≤ 1 for all t ≥ 0. | Wiki-STEM-kaggle |
Carathéodory function
In mathematical analysis, a Carathéodory function (or Carathéodory integrand) is a multivariable function that allows us to solve the following problem effectively: a composition of two Lebesgue-measurable functions does not have to be Lebesgue-measurable as well. Nevertheless, a composition of a measurable function with a continuous function is indeed Lebesgue-measurable, but in many situations continuity is too restrictive an assumption. Carathéodory functions are more general than continuous functions, but still allow a composition with a Lebesgue-measurable function to be measurable. Carathéodory functions play a significant role in the calculus of variations, and they are named after the Greek mathematician Constantin Carathéodory. | Wiki-STEM-kaggle |
Hermitian function
In mathematical analysis, a Hermitian function is a complex function with the property that its complex conjugate is equal to the original function with the variable changed in sign: $f^{*}(x) = f(-x)$ (where the $*$ indicates the complex conjugate) for all $x$ in the domain of $f$. In physics, this property is referred to as PT symmetry. This definition extends also to functions of two or more variables, e.g., in the case that $f$ is a function of two variables it is Hermitian if $f^{*}(x_1, x_2) = f(-x_1, -x_2)$ for all pairs $(x_1, x_2)$ in the domain of $f$. From this definition it follows immediately that $f$ is a Hermitian function if and only if the real part of $f$ is an even function and the imaginary part of $f$ is an odd function. | Wiki-STEM-kaggle |
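One familiar place where Hermitian symmetry appears is the Fourier transform of a real-valued signal; the following sketch (illustrative only — the signal and the use of NumPy's DFT are assumptions of this snippet, not content of the row above) checks the discrete version $X_k = \overline{X_{-k}}$:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=128)            # a real-valued sequence
X = np.fft.fft(x)

# The DFT of a real sequence is Hermitian (conjugate-symmetric):
#   X[k] = conj(X[-k mod N]) for every frequency index k.
N = x.size
k = np.arange(N)
assert np.allclose(X[k], np.conj(X[(-k) % N]))
```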
Young measure
In mathematical analysis, a Young measure is a parameterized measure that is associated with certain subsequences of a given bounded sequence of measurable functions. They are a quantification of the oscillation effect of the sequence in the limit. Young measures have applications in the calculus of variations, especially models from material science, and the study of nonlinear partial differential equations, as well as in various optimization and optimal control problems. They are named after Laurence Chisholm Young, who introduced them in 1937 in one dimension (curves) and later, in 1942, in higher dimensions. | Wiki-STEM-kaggle |
Bounded domain
In mathematical analysis, a domain or region is a non-empty connected open set in a topological space, in particular any non-empty connected open subset of the real coordinate space Rn or the complex coordinate space Cn. A connected open subset of coordinate space is frequently used for the domain of a function, but in general, functions may be defined on sets that are not topological spaces. The basic idea of a connected subset of a space dates from the 19th century, but precise definitions vary slightly from generation to generation, author to author, and edition to edition, as concepts developed and terms were translated between German, French, and English works. In English, some authors use the term domain, some use the term region, some use both terms interchangeably, and some define the two terms slightly differently; some avoid ambiguity by sticking with a phrase such as non-empty connected open subset. | Wiki-STEM-kaggle |
Equicontinuous linear maps
In mathematical analysis, a family of functions is equicontinuous if all the functions are continuous and they have equal variation over a given neighbourhood, in a precise sense described herein. In particular, the concept applies to countable families, and thus sequences of functions. Equicontinuity appears in the formulation of Ascoli's theorem, which states that a subset of C(X), the space of continuous functions on a compact Hausdorff space X, is compact if and only if it is closed, pointwise bounded and equicontinuous. As a corollary, a sequence in C(X) is uniformly convergent if and only if it is equicontinuous and converges pointwise to a function (not necessarily continuous a priori). In particular, the limit of an equicontinuous pointwise convergent sequence of continuous functions fn on either a metric space or a locally compact space is continuous. If, in addition, the fn are holomorphic, then the limit is also holomorphic. The uniform boundedness principle states that a pointwise bounded family of continuous linear operators between Banach spaces is equicontinuous. | Wiki-STEM-kaggle |
Function of bounded variation
In mathematical analysis, a function of bounded variation, also known as a BV function, is a real-valued function whose total variation is bounded (finite): the graph of a function having this property is well behaved in a precise sense. For a continuous function of a single variable, being of bounded variation means that the distance along the direction of the y-axis, neglecting the contribution of motion along the x-axis, traveled by a point moving along the graph has a finite value. For a continuous function of several variables, the meaning of the definition is the same, except for the fact that the continuous path to be considered cannot be the whole graph of the given function (which is a hypersurface in this case), but can be every intersection of the graph itself with a hyperplane (in the case of functions of two variables, a plane) parallel to a fixed x-axis and to the y-axis. Functions of bounded variation are precisely those with respect to which one may find Riemann–Stieltjes integrals of all continuous functions. Another characterization states that the functions of bounded variation on a compact interval are exactly those f which can be written as a difference g − h, where both g and h are bounded monotone. In particular, a BV function may have discontinuities, but at most countably many. In the case of several variables, a function f defined on an open subset Ω of $\mathbb{R}^n$ is said to have bounded variation if its distributional derivative is a vector-valued finite Radon measure. One of the most important aspects of functions of bounded variation is that they form an algebra of discontinuous functions whose first derivative exists almost everywhere: due to this fact, they can be, and frequently are, used to define generalized solutions of nonlinear problems involving functionals, ordinary and partial differential equations in mathematics, physics and engineering. We have the following chain of inclusions for continuous functions over a closed, bounded interval of the real line: Continuously differentiable ⊆ Lipschitz continuous ⊆ absolutely continuous ⊆ continuous and bounded variation ⊆ differentiable almost everywhere | Wiki-STEM-kaggle |
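As a rough numerical sketch (illustrative only; the grid, the sample functions, and the helper name are arbitrary choices, not part of the row above), the total variation of a smooth function on an interval can be approximated by summing absolute increments over a fine grid:

```python
import numpy as np

def discrete_total_variation(values):
    """Sum of |f(x_{i+1}) - f(x_i)| over a sampled grid (approximates total variation)."""
    return float(np.sum(np.abs(np.diff(values))))

x = np.linspace(0.0, 2.0 * np.pi, 100_001)
print(discrete_total_variation(np.sin(x)))   # ~4: sin rises 1, falls 2, rises 1 per period
print(discrete_total_variation(x))           # ~2*pi: monotone, so TV = f(b) - f(a)
```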
Metric differential
In mathematical analysis, a metric differential is a generalization of a derivative for a Lipschitz continuous function defined on a Euclidean space and taking values in an arbitrary metric space. With this definition of a derivative, one can generalize Rademacher's theorem to metric space-valued Lipschitz functions. | Wiki-STEM-kaggle |
Complete metric space
In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M. Intuitively, a space is complete if there are no "points missing" from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. $\sqrt{2}$ is "missing" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it (see further examples below). It is always possible to "fill all the holes", leading to the completion of a given space, as explained below. | Wiki-STEM-kaggle |
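A concrete sketch of the incompleteness of the rationals (illustrative only; Newton's iteration and Python's Fraction type are choices of this snippet, not part of the row above): each iterate is rational and the sequence is Cauchy, yet its limit $\sqrt{2}$ lies outside $\mathbb{Q}$:

```python
from fractions import Fraction

# Newton's iteration for sqrt(2): x_{n+1} = (x_n + 2/x_n) / 2.
# Every iterate is a rational number, and |x_n^2 - 2| -> 0,
# so (x_n) is a Cauchy sequence in Q with no rational limit.
x = Fraction(1)
for _ in range(6):
    x = (x + 2 / x) / 2
    print(float(x), float(abs(x * x - 2)))
```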
Modulus of continuity
In mathematical analysis, a modulus of continuity is a function $\omega : [0, \infty] \to [0, \infty]$ used to measure quantitatively the uniform continuity of functions. So, a function f: I → R admits ω as a modulus of continuity if and only if $|f(x) - f(y)| \leq \omega(|x - y|)$ for all x and y in the domain of f. Since moduli of continuity are required to be infinitesimal at 0, a function turns out to be uniformly continuous if and only if it admits a modulus of continuity. Moreover, relevance to the notion is given by the fact that sets of functions sharing the same modulus of continuity are exactly equicontinuous families. For instance, the modulus ω(t) := kt describes the k-Lipschitz functions, the moduli $\omega(t) := kt^{\alpha}$ describe Hölder continuity, the modulus ω(t) := kt(|log t| + 1) describes the almost Lipschitz class, and so on. In general, the role of ω is to fix some explicit functional dependence of ε on δ in the (ε, δ) definition of uniform continuity. The same notions generalize naturally to functions between metric spaces. Moreover, a suitable local version of these notions allows one to describe quantitatively the continuity at a point in terms of moduli of continuity. | Wiki-STEM-kaggle |
Modulus of continuity
A special role is played by concave moduli of continuity, especially in connection with extension properties, and with approximation of uniformly continuous functions. For a function between metric spaces, it is equivalent to admit a modulus of continuity that is either concave, or subadditive, or uniformly continuous, or sublinear (in the sense of growth). Actually, the existence of such special moduli of continuity for a uniformly continuous function is always ensured whenever the domain is either a compact, or a convex subset of a normed space. However, a uniformly continuous function on a general metric space admits a concave modulus of continuity if and only if the ratios $\frac{d_Y(f(x), f(x'))}{d_X(x, x')}$ are uniformly bounded for all pairs (x, x′) bounded away from the diagonal of X × X. The functions with the latter property constitute a special subclass of the uniformly continuous functions, that in the following we refer to as the special uniformly continuous functions. Real-valued special uniformly continuous functions on the metric space X can also be characterized as the set of all functions that are restrictions to X of uniformly continuous functions over any normed space isometrically containing X. Also, it can be characterized as the uniform closure of the Lipschitz functions on X. | Wiki-STEM-kaggle |
Measure zero
In mathematical analysis, a null set is a Lebesgue measurable set of real numbers that has measure zero. This can be characterized as a set that can be covered by a countable union of intervals of arbitrarily small total length. The notion of null set should not be confused with the empty set as defined in set theory. Although the empty set has Lebesgue measure zero, there are also non-empty sets which are null. For example, any non-empty countable set of real numbers has Lebesgue measure zero and therefore is null. More generally, on a given measure space $M = (X, \Sigma, \mu)$ a null set is a set $S \in \Sigma$ such that $\mu(S) = 0$. | Wiki-STEM-kaggle |
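A standard argument (added for illustration; it is not part of the row above) shows why every countable set of reals, such as the rationals, is null:

```latex
% Enumerate the set as q_1, q_2, q_3, ...  Fix eps > 0 and cover q_n by an
% open interval I_n of length eps / 2^n centred at q_n.  Then the set is
% contained in the union of the I_n, and the total length satisfies
\[
  \sum_{n=1}^{\infty} |I_n| \;=\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} \;=\; \varepsilon .
\]
% Since eps was arbitrary, the Lebesgue measure of the set is 0.
```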
Positive invariant set
In mathematical analysis, a positively (or positive) invariant set is a set with the following properties: Suppose $\dot{x} = f(x)$ is a dynamical system, $x(t, x_0)$ is a trajectory, and $x_0$ is the initial point. Let $\mathcal{O} := \{ x \in \mathbb{R}^n \mid \varphi(x) = 0 \}$, where $\varphi$ is a real-valued function. The set $\mathcal{O}$ is said to be positively invariant if $x_0 \in \mathcal{O}$ implies that $x(t, x_0) \in \mathcal{O}$ for all $t \geq 0$. In other words, once a trajectory of the system enters $\mathcal{O}$, it will never leave it again. | Wiki-STEM-kaggle |
Strong measure zero set
In mathematical analysis, a strong measure zero set is a subset A of the real line with the following property: for every sequence (εn) of positive reals there exists a sequence (In) of intervals such that |In| < εn for all n and A is contained in the union of the In. (Here |In| denotes the length of the interval In.) Every countable set is a strong measure zero set, and so is every union of countably many strong measure zero sets. Every strong measure zero set has Lebesgue measure 0. The Cantor set is an example of an uncountable set of Lebesgue measure 0 which is not of strong measure zero. Borel's conjecture states that every strong measure zero set is countable. It is now known that this statement is independent of ZFC (the Zermelo–Fraenkel axioms of set theory with the axiom of choice, which is the standard axiom system assumed in mathematics). This means that Borel's conjecture can neither be proven nor disproven in ZFC (assuming ZFC is consistent). Sierpiński proved in 1928 that the continuum hypothesis (which is now also known to be independent of ZFC) implies the existence of uncountable strong measure zero sets. In 1976 Laver used a method of forcing to construct a model of ZFC in which Borel's conjecture holds. These two results together establish the independence of Borel's conjecture. The following characterization of strong measure zero sets was proved in 1973: A set A ⊆ R has strong measure zero if and only if A + M ≠ R for every meagre set M ⊆ R. This result establishes a connection to the notion of strongly meagre set, defined as follows: A set M ⊆ R is strongly meagre if and only if A + M ≠ R for every set A ⊆ R of Lebesgue measure zero. The dual Borel conjecture states that every strongly meagre set is countable. This statement is also independent of ZFC. | Wiki-STEM-kaggle |
Thin set (analysis)
In mathematical analysis, a thin set is a subset of n-dimensional complex space Cn with the property that each point has a neighbourhood on which some non-zero holomorphic function vanishes. Since the set on which a holomorphic function vanishes is closed and has empty interior (by the Identity theorem), a thin set is nowhere dense, and the closure of a thin set is also thin. The fine topology was introduced in 1940 by Henri Cartan to aid in the study of thin sets. | Wiki-STEM-kaggle |
Improper integrals
In mathematical analysis, an improper integral is an extension of the notion of a definite integral to cases that violate the usual assumptions for that kind of integral. In the context of Riemann integrals (or, equivalently, Darboux integrals), this typically involves unboundedness, either of the set over which the integral is taken or of the integrand (the function being integrated), or both. It may also involve bounded but not closed sets or bounded but not continuous functions. While an improper integral is typically written symbolically just like a standard definite integral, it actually represents a limit of a definite integral or a sum of such limits; thus improper integrals are said to converge or diverge. If a regular definite integral (which may retronymically be called a proper integral) is worked out as if it is improper, the same answer will result. In the simplest case of a real-valued function of a single variable integrated in the sense of Riemann (or Darboux) over a single interval, improper integrals may be in any of the following forms: $\int_{a}^{\infty} f(x)\,dx$; $\int_{-\infty}^{b} f(x)\,dx$; $\int_{-\infty}^{\infty} f(x)\,dx$; $\int_{a}^{b} f(x)\,dx$, where $f(x)$ is undefined or discontinuous somewhere on $[a, b]$. The first three forms are improper because the integrals are taken over an unbounded interval. (They may be improper for other reasons, as well, as explained below.) | Wiki-STEM-kaggle |
Improper integrals
Such an integral is sometimes described as being of the "first" type or kind if the integrand otherwise satisfies the assumptions of integration. Integrals in the fourth form that are improper because $f(x)$ has a vertical asymptote somewhere on the interval $[a, b]$ may be described as being of the "second" type or kind. Integrals that combine aspects of both types are sometimes described as being of the "third" type or kind. In each case above, the improper integral must be rewritten using one or more limits, depending on what is causing the integral to be improper. For example, in case 1, if $f(x)$ is continuous on the entire interval $[a, \infty)$, then $\int_{a}^{\infty} f(x)\,dx = \lim_{b\to\infty} \int_{a}^{b} f(x)\,dx$. The limit on the right is taken to be the definition of the integral notation on the left. | Wiki-STEM-kaggle |
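A short worked example (added for illustration, not part of the row above) of an improper integral of the first kind that converges:

```latex
% Improper integral of the first kind: the interval of integration is unbounded.
\[
  \int_{1}^{\infty} \frac{dx}{x^{2}}
  \;=\; \lim_{b \to \infty} \int_{1}^{b} \frac{dx}{x^{2}}
  \;=\; \lim_{b \to \infty} \left( 1 - \frac{1}{b} \right)
  \;=\; 1 .
\]
% By contrast, \int_1^\infty dx / x = \lim_{b \to \infty} \ln b diverges.
```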