text (string, 16-3.64k chars) | page_title (string, 2-111 chars) | source (1 value)
---|---|---|
call. they can then inspect the call's return value to determine their status, child or parent, and act accordingly. = = history = = one of the earliest references to a fork concept appeared in a multiprocessor system design by melvin conway, published in 1962. conway's paper motivated the implementation by l. peter deutsch of fork in the genie time - sharing system, where the concept was borrowed by ken thompson for its earliest appearance in research unix. fork later became a standard interface in posix. = = communication = = the child process starts off with a copy of its parent's file descriptors. for interprocess communication, the parent process will often create one or several pipes, and then after forking the processes will close the ends of the pipes that they do not need. = = variants = = = = = vfork = = = vfork is a variant of fork with the same calling convention and much the same semantics, but only to be used in restricted situations. it originated in the 3bsd version of unix, the first unix to support virtual memory. it was standardized by posix, which permitted vfork to have exactly the same behavior as fork, but was marked obsolescent in the 2004 edition and was replaced by posix _ spawn ( ) ( which is typically implemented via vfork ) in subsequent editions. when a vfork system call is issued, the parent process will be suspended until the child process has either completed execution or been replaced with a new executable image via one of the " exec " family of system calls. the child borrows the memory management unit setup from the parent and memory pages are shared among the parent and child process with no copying done, and in particular with no copy - on - write semantics ; hence, if the child process makes a modification in any of the shared pages, no new page will be created and the modified pages are visible to the parent process too. since there is absolutely no page copying involved ( consuming additional memory ), this technique is an optimization over plain fork in full - copy environments when used with exec. in posix, using vfork for any purpose except as a prelude to an immediate call to a function from the exec family ( and a select few other operations ) gives rise to undefined behavior. as with vfork, the child borrows data structures rather than copying them. vfork is still faster than a fork that uses
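The pipe-based communication just described can be shown in a minimal C sketch. This is an illustration assembled from the description above, not code from the article: the message text is invented, and error handling is kept to a bare minimum.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                      /* fd[0] is the read end, fd[1] the write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {
        /* child: it only writes, so it closes the end it does not need */
        close(fd[0]);
        const char *msg = "hello from the child";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        _exit(0);
    }

    /* parent: it only reads, so it closes the write end */
    close(fd[1]);
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("parent received: %s\n", buf);
    }
    close(fd[0]);
    waitpid(pid, NULL, 0);          /* reap the child */
    return 0;
}
```

Closing the unused pipe ends matters: the parent's read() only reports end-of-file once every copy of the write end, including its own, has been closed.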
| Fork (system call) | wikipedia |
a function from the exec family ( and a select few other operations ) gives rise to undefined behavior. as with vfork, the child borrows data structures rather than copying them. vfork is still faster than a fork that uses copy on write semantics. system v did not support this function call before system vr4 was introduced, because the memory sharing that it causes is error - prone : vfork does not copy page tables so it is faster than the system v fork implementation. but the child process executes in the same physical address space as the parent process ( until an exec or exit ) and can thus overwrite the parent's data and stack. a dangerous situation could arise if a programmer uses vfork incorrectly, so the onus for calling vfork lies with the programmer. the difference between the system v approach and the bsd approach is philosophical : should the kernel hide idiosyncrasies of its implementation from users, or should it allow sophisticated users the opportunity to take advantage of the implementation to do a logical function more efficiently? similarly, the linux man page for vfork strongly discourages its use : it is rather unfortunate that linux revived this specter from the past. the bsd man page states : " this system call will be eliminated when proper system sharing mechanisms are implemented. users should not depend on the memory sharing semantics of vfork ( ) as it will, in that case, be made synonymous to fork ( 2 ). " other problems with vfork include deadlocks that might occur in multithreaded programs due to interactions with dynamic linking. as a replacement for the vfork interface, posix introduced the posix _ spawn family of functions that combine the actions of fork and exec. these functions may be implemented as library routines in terms of fork, as is done in linux, or in terms of vfork for better performance, as is done in solaris, but the posix specification notes that they were " designed as kernel operations ", especially for operating systems running on constrained hardware and real - time systems. while the 4. 4bsd implementation got rid of the vfork implementation, causing vfork to have the same behavior as fork, it was later reinstated in the netbsd operating system for performance reasons. some embedded operating systems such as uclinux omit fork and only implement vfork, because they need to operate on devices where copy - on - write is impossible to implement due
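Since posix_spawn is presented here as the replacement that combines fork and exec, a small hedged sketch may help. The command being launched (echo) is only an example; the call itself is the standard posix_spawnp interface.

```c
#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>

extern char **environ;

int main(void) {
    pid_t pid;
    char *argv[] = { "echo", "spawned child", NULL };   /* example command line */

    /* posix_spawnp searches PATH and performs the fork-and-exec step in one call */
    int rc = posix_spawnp(&pid, "echo", NULL, NULL, argv, environ);
    if (rc != 0) {
        fprintf(stderr, "posix_spawnp failed: %d\n", rc);
        return 1;
    }

    int status;
    waitpid(pid, &status, 0);       /* wait for the spawned child, as one would after fork */
    return 0;
}
```

Whether the library implements this on top of fork or of vfork (as the text notes Linux and Solaris do differently) is invisible to the caller.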
| Fork (system call) | wikipedia |
later reinstated in the netbsd operating system for performance reasons. some embedded operating systems such as uclinux omit fork and only implement vfork, because they need to operate on devices where copy - on - write is impossible to implement due to lack of a memory management unit. = = = rfork = = = the plan 9 operating system, created by the designers of unix, includes fork but also a variant called " rfork " that permits fine - grained sharing of resources between parent and child processes, including the address space ( except for a stack segment, which is unique to each process ), environment variables and the filesystem namespace ; this makes it a unified interface for the creation of both processes and threads within them. both freebsd and irix adopted the rfork system call from plan 9, the latter renaming it " sproc ". = = = clone = = = clone is a system call in the linux kernel that creates a child process that may share parts of its execution context with the parent. like freebsd's rfork and irix's sproc, linux's clone was inspired by plan 9's rfork and can be used to implement threads ( though application programmers will typically use a higher - level interface such as pthreads, implemented on top of clone ). the " separate stacks " feature from plan 9 and irix has been omitted because ( according to linus torvalds ) it causes too much overhead. = = forking in other operating systems = = in the original design of the vms operating system ( 1977 ), a copy operation with subsequent mutation of the content of a few specific addresses for the new process as in forking was considered risky. errors in the current process state may be copied to a child process. here, the metaphor of process spawning is used : each component of the memory layout of the new process is newly constructed from scratch. the spawn metaphor was later adopted in microsoft operating systems ( 1993 ). the posix - compatibility component of vm / cms ( openextensions ) provides a very limited implementation of fork, in which the parent is suspended while the child executes, and the child and the parent share the same address space. this is essentially a vfork labelled as a fork. ( this applies to the cms guest operating system only ; other vm guest operating systems, such as linux, provide standard fork functionality. ) = = application usage = = the
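The clone description above can be illustrated with the glibc clone() wrapper. The flag choice, stack size, and shared counter below are my own illustrative assumptions; they show the thread-style sharing (CLONE_VM) that the text says thread libraries build on.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

#define STACK_SIZE (1024 * 1024)

static int shared_counter = 0;

static int child_fn(void *arg) {
    (void)arg;
    shared_counter++;   /* visible to the parent because CLONE_VM shares the address space */
    return 0;
}

int main(void) {
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) { perror("malloc"); return 1; }

    /* the child runs child_fn on its own stack; SIGCHLD lets the parent reap it with waitpid */
    pid_t pid = clone(child_fn, stack + STACK_SIZE, CLONE_VM | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    printf("counter after child ran: %d\n", shared_counter);   /* prints 1 */
    free(stack);
    return 0;
}
```

Passing stack + STACK_SIZE reflects the downward-growing stack on common architectures; the caller supplies the child's stack explicitly.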
| Fork (system call) | wikipedia |
space. this is essentially a vfork labelled as a fork. ( this applies to the cms guest operating system only ; other vm guest operating systems, such as linux, provide standard fork functionality. ) = = application usage = = the following variant of the " hello, world! " program demonstrates the mechanics of the fork system call in the c programming language. the program forks into two processes, each deciding what functionality they perform based on the return value of the fork system call. boilerplate code such as header inclusions has been omitted. what follows is a dissection of this program. the first statement in main calls the fork system call to split execution into two processes. the return value of fork is recorded in a variable of type pid _ t, which is the posix type for process identifiers ( pids ). minus one indicates an error in fork : no new process was created, so an error message is printed. if fork was successful, then there are now two processes, both executing the main function from the point where fork has returned. to make the processes perform different tasks, the program must branch on the return value of fork to determine whether it is executing as the child process or the parent process. in the child process, the return value appears as zero ( which is an invalid process identifier ). the child process prints the desired greeting message, then exits. ( for technical reasons, the posix _ exit function must be used here instead of the c standard exit function. ) the other process, the parent, receives from fork the process identifier of the child, which is always a positive number. the parent process passes this identifier to the waitpid system call to suspend execution until the child has exited. when this has happened, the parent resumes execution and exits by means of the return statement. = = see also = = fork bomb fork – exec exit ( system call ) spawn ( computing ) wait ( system call ) = = references = =
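The program being dissected is not included in this excerpt, so the following is a reconstruction that follows the description above step by step; the greeting text is my own wording.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();             /* split execution into two processes */

    if (pid == -1) {
        perror("fork failed");      /* minus one: no new process was created */
        return 1;
    } else if (pid == 0) {
        /* zero: this is the child (0 is not a valid process identifier) */
        printf("Hello, World from the child process!\n");
        _exit(0);                   /* the POSIX _exit function, not the C standard exit */
    } else {
        /* a positive value: this is the parent, and pid identifies the child */
        waitpid(pid, NULL, 0);      /* suspend until the child has exited */
    }
    return 0;                       /* parent resumes and exits via the return statement */
}
```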
| Fork (system call) | wikipedia |
a discourse marker is a word or a phrase that plays a role in managing the flow and structure of discourse. since their main function is at the level of discourse ( sequences of utterances ) rather than at the level of utterances or sentences, discourse markers are relatively syntax - independent and usually do not change the truth conditional meaning of the sentence. they can also indicate what a speaker is doing on a variety of different planes. examples of discourse markers include the particles oh, well, now, then, you know, and i mean, and the discourse connectives so, because, and, but, and or. the term discourse marker was popularized by deborah schiffrin in her 1987 book discourse markers. = = usage in english = = common discourse markers used in the english language include you know, actually, basically, like, i mean, okay and so. discourse markers come from varied word classes, such as adverbs ( well ) or prepositional phrases ( in fact ). the process that leads from a free construction to a discourse marker can be traced back through grammaticalization studies and resources. discourse markers can be seen as a “ joint product ” of grammaticalization and cooption, explaining both their grammatical behavior and their metatextual properties. traditionally, some of the words or phrases that were considered discourse markers were treated as fillers or expletives : words or phrases that had no function at all. now they are assigned functions in different levels of analysis : topic changes, reformulations, discourse planning, stressing, hedging, or backchanneling. yael maschler divided discourse markers into four broad categories : interpersonal, referential, structural, and cognitive. interpersonal markers are used to indicate the relationship between the speaker and the listener. perception : look, believe me agreement : exactly, or disagreement : i'm not sure amazement : wow referential markers, usually conjunctions, are used to indicate the sequence, causality, and coordination between statements. sequence : now, then causality : because coordination : and, or non - coordination : but structural markers indicate the hierarchy of conversational actions at the time in which they are spoken. these markers indicate which statements the speaker believes to be most or least important. organization : first of all introduction : so summarization : in the end cognitive markers reveal the speaker's thought process processing information : uhh realization : oh! rephrasing : i mean in her book on discourse analysis, barbara johnstone called discourse markers that are used by speakers
| Discourse marker | wikipedia |
: so summarization : in the end cognitive markers reveal the speaker's thought process processing information : uhh realization : oh! rephrasing : i mean in her book on discourse analysis, barbara johnstone called discourse markers that are used by speakers to take the floor ( like so ) " boundarymarking uses " of the word. this use of discourse markers is present and important in both monologue and dialogue situations. = = examples in other languages = = another example of an interpersonal discourse marker is the yiddish marker nu, also used in modern hebrew and other languages, often to convey impatience or to urge the listener to act ( cf. german cognate nun, meaning'now'in the sense of'at the moment being discussed ', but contrast latin etymological cognate nunc, meaning'now'in the sense of'at the moment in which discussion is occurring'; latin used iam for'at the moment being discussed'( and many other meanings ) and german uses jetzt for'at the moment in which discussion is occurring'). the french phrase a propos can indicate'a smooth or a more abrupt discourse shift.'= = see also = = filler ( linguistics ) so ( word ) speech disfluency tag question interjection = = notes = = = = further reading = = hansen, maj - britt mosegaard. 1998. the semantic status of discourse markers. lingua 104 ( 3 – 4 ), 235 – 260. brown, benjamin ( 2014 ). "'but me no buts': the theological debate between the hasidim and the mitnagdim in light of the discourse - markers theory ". numen. 61 ( 5 – 6 ) : 525 – 551. doi : 10. 1163 / 15685276 - 12341341. brown, benjamin ( 2014 ). "'some say this, some say that': pragmatics and discourse markers in yad malachi's interpretation rules ". language and law. 3 : 1 – 20.
| Discourse marker | wikipedia |
kinesiogenomics refers to the study of genetics in the various disciplines of the field of kinesiology, the study of human movement. the field has also been referred to as " exercise genomics " or " exercisenomics. " areas of study within kinesiogenomics include the role of gene sequence variation ( i. e., alleles ) in sport performance, identification of genes ( and their different alleles ) that contribute to the response and adaptation of the body's tissue systems ( e. g., muscles, heart, metabolism, etc. ) to various exercise - related stimuli, the use of genetic testing to predict sport performance or individualize exercise prescription, and gene doping, the potential for genetic therapy to be used to enhance sport performance. the field of kinesiogenomics is relatively new, though two books have outlined basic concepts. a regularly published review article entitled, " the human gene map for performance and health - related fitness phenotypes, " describes the genes that have been studied in relation to specific exercise - and fitness - related traits. the most recent ( seventh ) update was published in 2009. = = research = = within the field of kinesiogenomics, several research studies have been conducted in recent years. this increase in research has led to advancements of knowledge in associating how genes and gene sequencing effects a person's exercise habits and health. one study focusing on twins looked to see the effect of genes on exercise ability, the effects of exercise on mood, and the ability to lose weight. the research concluded that genetics had a significant impact of the likelihood an individual would participate in exercise. an increase in participation can be linked to personality factors such as self - motivation and self - discipline, while a lower participation in exercise can be influenced by factors such as anxiety and depression. these personality trait, both positive and negative, can be associated to one's genetic makeup. = = references = =
| Kinesiogenomics | wikipedia |
comparative linguistics is a branch of historical linguistics that is concerned with comparing languages to establish their historical relatedness. genetic relatedness implies a common origin or proto - language and comparative linguistics aims to construct language families, to reconstruct proto - languages and specify the changes that have resulted in the documented languages. to maintain a clear distinction between attested and reconstructed forms, comparative linguists prefix an asterisk to any form that is not found in surviving texts. a number of methods for carrying out language classification have been developed, ranging from simple inspection to computerised hypothesis testing. such methods have gone through a long process of development. = = methods = = the fundamental technique of comparative linguistics is to compare phonological systems, morphological systems, syntax and the lexicon of two or more languages using techniques such as the comparative method. in principle, every difference between two related languages should be explicable to a high degree of plausibility ; systematic changes, for example in phonological or morphological systems are expected to be highly regular ( consistent ). in practice, the comparison may be more restricted, e. g. just to the lexicon. in some methods it may be possible to reconstruct an earlier proto - language. although the proto - languages reconstructed by the comparative method are hypothetical, a reconstruction may have predictive power. the most notable example of this is ferdinand de saussure's proposal that the indo - european consonant system contained laryngeals, a type of consonant attested in no indo - european language known at the time. the hypothesis was vindicated with the discovery of hittite, which proved to have exactly the consonants saussure had hypothesized in the environments he had predicted. where languages are derived from a very distant ancestor, and are thus more distantly related, the comparative method becomes less practicable. in particular, attempting to relate two reconstructed proto - languages by the comparative method has not generally produced results that have met with wide acceptance. the method has also not been very good at unambiguously identifying sub - families ; thus, different scholars have produced conflicting results, for example in indo - european. a number of methods based on statistical analysis of vocabulary have been developed to try and overcome this limitation, such as lexicostatistics and mass comparison. the former uses lexical cognates like the comparative method, while the latter uses only lexical similarity. the theoretical basis of such methods is that vocabulary items can be matched without a detailed language reconstruction and that comparing enough vocabulary
| Comparative linguistics | wikipedia |
##s and mass comparison. the former uses lexical cognates like the comparative method, while the latter uses only lexical similarity. the theoretical basis of such methods is that vocabulary items can be matched without a detailed language reconstruction and that comparing enough vocabulary items will negate individual inaccuracies ; thus, they can be used to determine relatedness but not to determine the proto - language. = = history = = the earliest method of this type was the comparative method, which was developed over many years, culminating in the nineteenth century. this uses a long word list and detailed study. however, it has been criticized for example as subjective, informal, and lacking testability. the comparative method uses information from two or more languages and allows reconstruction of the ancestral language. the method of internal reconstruction uses only a single language, with comparison of word variants, to perform the same function. internal reconstruction is more resistant to interference but usually has a limited available base of utilizable words and is able to reconstruct only certain changes ( those that have left traces as morphophonological variations ). in the twentieth century an alternative method, lexicostatistics, was developed, which is mainly associated with morris swadesh but is based on earlier work. this uses a short word list of basic vocabulary in the various languages for comparisons. swadesh used 100 ( earlier 200 ) items that are assumed to be cognate ( on the basis of phonetic similarity ) in the languages being compared, though other lists have also been used. distance measures are derived by examination of language pairs but such methods reduce the information. an outgrowth of lexicostatistics is glottochronology, initially developed in the 1950s, which proposed a mathematical formula for establishing the date when two languages separated, based on percentage of a core vocabulary of culturally independent words. in its simplest form a constant rate of change is assumed, though later versions allow variance but still fail to achieve reliability. glottochronology has met with mounting scepticism, and is seldom applied today. dating estimates can now be generated by computerised methods that have fewer restrictions, calculating rates from the data. however, no mathematical means of producing proto - language split - times on the basis of lexical retention has been proven reliable. another controversial method, developed by joseph greenberg, is mass comparison. the method, which disavows any ability to date developments, aims simply to show which languages are more and less close to each other. greenberg suggested that
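The glottochronological formula itself is not reproduced in the text above. Its classic constant-rate form, the Swadesh-Lees formula, is supplied here from general background rather than from this article, so treat the constants as assumptions:

$$t = \frac{\ln c}{2 \ln r}$$

where t is the separation time in millennia, c is the proportion of the core list still shared as cognates, and r is the assumed constant retention rate per millennium (about 0.86 for the 100-word list). For example, two languages sharing 74% of the list would be dated to roughly one millennium of separation, since 0.86 squared is about 0.74.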
| Comparative linguistics | wikipedia |
has been proven reliable. another controversial method, developed by joseph greenberg, is mass comparison. the method, which disavows any ability to date developments, aims simply to show which languages are more and less close to each other. greenberg suggested that the method is useful for preliminary grouping of languages known to be related as a first step toward more in - depth comparative analysis. however, since mass comparison eschews the establishment of regular changes, it is flatly rejected by the majority of historical linguists. recently, computerised statistical hypothesis testing methods have been developed which are related to both the comparative method and lexicostatistics. character based methods are similar to the former and distanced based methods are similar to the latter ( see quantitative comparative linguistics ). the characters used can be morphological or grammatical as well as lexical. since the mid - 1990s these more sophisticated tree - and network - based phylogenetic methods have been used to investigate the relationships between languages and to determine approximate dates for proto - languages. these are considered by many to show promise but are not wholly accepted by traditionalists. however, they are not intended to replace older methods but to supplement them. such statistical methods cannot be used to derive the features of a proto - language, apart from the fact of the existence of shared items of the compared vocabulary. these approaches have been challenged for their methodological problems, since without a reconstruction or at least a detailed list of phonological correspondences there can be no demonstration that two words in different languages are cognate. = = related fields = = there are other branches of linguistics that involve comparing languages, which are not, however, part of comparative linguistics : linguistic typology compares languages to classify them by their features. its ultimate aim is to understand the universals that govern language, and the range of types found in the world's languages in respect of any particular feature ( word order or vowel system, for example ). typological similarity does not imply a historical relationship. however, typological arguments can be used in comparative linguistics : one reconstruction may be preferred to another as typologically more plausible. contact linguistics examines the linguistic results of contact between the speakers of different languages, particularly as evidenced in loan words. an empirical study of loans is by definition historical in focus and therefore forms part of the subject matter of historical linguistics. one of the goals of etymology is to establish which items in a language's vocabulary result from linguistic contact. this is also an important issue both for the comparative method and for the lexical
| Comparative linguistics | wikipedia |
and therefore forms part of the subject matter of historical linguistics. one of the goals of etymology is to establish which items in a language's vocabulary result from linguistic contact. this is also an important issue both for the comparative method and for the lexical comparison methods, since failure to recognize a loan may distort the findings. contrastive linguistics compares languages usually with the aim of assisting language learning by identifying important differences between the learner's native and target languages. contrastive linguistics deals solely with present - day languages. = = pseudolinguistic comparisons = = comparative linguistics includes the study of the historical relationships of languages using the comparative method to search for regular ( i. e., recurring ) correspondences between the languages'phonology, grammar, and core vocabulary, and through hypothesis testing, which involves examining specific patterns of similarity and difference across languages ; some persons with little or no specialization in the field sometimes attempt to establish historical associations between languages by noting similarities between them, in a way that is considered pseudoscientific by specialists ( e. g. spurious comparisons between ancient egyptian and languages like wolof, as proposed by diop in the 1960s ). the most common method applied in pseudoscientific language comparisons is to search two or more languages for words that seem similar in their sound and meaning. while similarities of this kind often seem convincing to laypersons, linguistic scientists consider this kind of comparison to be unreliable for two primary reasons. first, the method applied is not well - defined : the criterion of similarity is subjective and thus not subject to verification or falsification, which is contrary to the principles of the scientific method. second, the large size of all languages'vocabulary and a relatively limited inventory of articulated sounds used by most languages makes it easy to find coincidentally similar words between languages. there are sometimes political or religious reasons for associating languages in ways that some linguists would dispute. for example, it has been suggested that the turanian or ural – altaic language group, which relates sami and other languages to the mongolian language, was used to justify racism towards the sami in particular. there are also strong, albeit areal not genetic, similarities between the uralic and altaic languages which provided an innocent basis for this theory. in 1930s turkey, some promoted the sun language theory, one that showed that turkic languages were close to the original language. some believers in abrahamic religions try to derive their native languages from classical hebrew, as herbert w. armstrong, a proponent of british israelis
| Comparative linguistics | wikipedia |
1930s turkey, some promoted the sun language theory, one that showed that turkic languages were close to the original language. some believers in abrahamic religions try to derive their native languages from classical hebrew, as herbert w. armstrong, a proponent of british israelism, who said that the word british comes from hebrew brit meaning'covenant'and ish meaning'man ', supposedly proving that the british people are the'covenant people'of god. and lithuanian - american archaeologist marija gimbutas argued during the mid - 1900s that basque is clearly related to the extinct pictish and etruscan languages, in attempt to show that basque was a remnant of an " old european culture ". in the dissertatio de origine gentium americanarum ( 1625 ), the dutch lawyer hugo grotius " proves " that the american indians ( mohawks ) speak a language ( lingua maquaasiorum ) derived from scandinavian languages ( grotius was on sweden's payroll ), supporting swedish colonial pretensions in america. the dutch doctor johannes goropius becanus, in his origines antverpiana ( 1580 ) admits quis est enim qui non amet patrium sermonem ( " who does not love his fathers'language? " ), whilst asserting that hebrew is derived from dutch. the frenchman eloi johanneau claimed in 1818 ( melanges d'origines etymologiques et de questions grammaticales ) that the celtic language is the oldest, and the mother of all others. in 1759, joseph de guignes theorized ( memoire dans lequel on prouve que les chinois sont une colonie egyptienne ) that the chinese and egyptians were related, the former being a colony of the latter. in 1885, edward tregear ( the aryan maori ) compared the maori and " aryan " languages. jean prat, in his 1941 les langues nitales, claimed that the bantu languages of africa are descended from latin, coining the french linguistic term nitale in doing so. just as egyptian is related to brabantic, following becanus in his hieroglyphica, still using comparative methods. the first practitioners of comparative linguistics were not universally acclaimed : upon reading becanus'book, scaliger wrote, " never did i read greater nonsense ", and leibniz coined the term goropism ( from goropius ) to designate a far - sought, ridiculous etymology. there have also been assertion
| Comparative linguistics | wikipedia |
##canus'book, scaliger wrote, " never did i read greater nonsense ", and leibniz coined the term goropism ( from goropius ) to designate a far - sought, ridiculous etymology. there have also been assertions that humans are descended from non - primate animals, with the use of the voice being the primary basis for comparison. jean - pierre brisset ( in la grande nouvelle, around 1900 ) believed and claimed that humans evolved from frogs through linguistic connections, arguing that the croaking of frogs resembles spoken french. he suggested that the french word logement, meaning'dwelling,'originated from the word l'eau, which means'water.'= = see also = = comparative method comparative literature contrastive analysis contrastive linguistics glottochronology historical linguistics intercontinental dictionary series lexicostatistics mass comparison moscow school of comparative linguistics pseudoscientific language comparison quantitative comparative linguistics sound law = = references = = = = bibliography = = august schleicher : compendium der vergleichenden grammatik der indogermanischen sprachen. ( kurzer abriss der indogermanischen ursprache, des altindischen, altiranischen, altgriechischen, altitalischen, altkeltischen, altslawischen, litauischen und altdeutschen. ) ( 2 vols. ) weimar, h. boehlau ( 1861 / 62 ) ; reprinted by minerva gmbh, wissenschaftlicher verlag, isbn 3 - 8102 - 1071 - 4 karl brugmann, berthold delbruck, grundriss der vergleichenden grammatik der indogermanischen sprachen ( 1886 – 1916 ). raimo anttila, historical and comparative linguistics ( benjamins, 1989 ) isbn 90 - 272 - 3557 - 0 theodora bynon, historical linguistics ( cambridge university press, 1977 ) isbn 0 - 521 - 29188 - 7 richard d. janda and brian d. joseph ( eds ), the handbook of historical linguistics ( blackwell, 2004 ) isbn 1 - 4051 - 2747 - 3 giles, peter ; sievers, eduard ( 1911 ). " philology ". encyclopædia britannica. vol. 21 ( 11th ed. ). pp. 414 – 438. roger lass, historical linguistics and language change. ( cambridge university press, 1997 ) isbn 0 - 521 - 45
| Comparative linguistics | wikipedia |
. encyclopædia britannica. vol. 21 ( 11th ed. ). pp. 414 – 438. roger lass, historical linguistics and language change. ( cambridge university press, 1997 ) isbn 0 - 521 - 45924 - 9 winfred p. lehmann, historical linguistics : an introduction ( holt, 1962 ) isbn 0 - 03 - 011430 - 6 joseph salmons, bibliography of historical - comparative linguistics. oxford bibliographies online. r. l. trask ( ed. ), dictionary of historical and comparative linguistics ( fitzroy dearborn, 2001 ) isbn 1 - 57958 - 218 - 4
| Comparative linguistics | wikipedia |
inductive probability attempts to give the probability of future events based on past events. it is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. it is a source of knowledge about the world. there are three sources of knowledge : inference, communication, and deduction. communication relays information found using other methods. deduction establishes new facts based on existing facts. inference establishes new facts from data. its basis is bayes'theorem. information describing the world is written in a language. for example, a simple mathematical language of propositions may be chosen. sentences may be written down in this language as strings of characters. but in the computer it is possible to encode these sentences as strings of bits ( 1s and 0s ). then the language may be encoded so that the most commonly used sentences are the shortest. this internal language implicitly represents probabilities of statements. occam's razor says the " simplest theory, consistent with the data is most likely to be correct ". the " simplest theory " is interpreted as the representation of the theory written in this internal language. the theory with the shortest encoding in this internal language is most likely to be correct. = = history = = probability and statistics was focused on probability distributions and tests of significance. probability was formal, well defined, but limited in scope. in particular its application was limited to situations that could be defined as an experiment or trial, with a well defined population. bayes's theorem is named after rev. thomas bayes 1701 – 1761. bayesian inference broadened the application of probability to many situations where a population was not well defined. but bayes'theorem always depended on prior probabilities, to generate new probabilities. it was unclear where these prior probabilities should come from. ray solomonoff developed algorithmic probability which gave an explanation for what randomness is and how patterns in the data may be represented by computer programs, that give shorter representations of the data circa 1964. chris wallace and d. m. boulton developed minimum message length circa 1968. later jorma rissanen developed the minimum description length circa 1978. these methods allow information theory to be related to probability, in a way that can be compared to the application of bayes'theorem, but which give a source and explanation for the role of prior probabilities. marcus hutter combined decision theory with the work of ray solomonoff and andrey kolmogorov to give a theory for the par
| Inductive probability | wikipedia |
the application of bayes' theorem, but which give a source and explanation for the role of prior probabilities. marcus hutter combined decision theory with the work of ray solomonoff and andrey kolmogorov to give a theory for the pareto optimal behavior for an intelligent agent, circa 1998. = = = minimum description / message length = = = the program with the shortest length that matches the data is the most likely to predict future data. this is the thesis behind the minimum message length and minimum description length methods. at first sight bayes' theorem appears different from the minimum message / description length principle ; at closer inspection it turns out to be the same. bayes' theorem is about conditional probabilities, and states the probability that event b happens if event a happens first : $p(a \land b) = p(b) \cdot p(a \mid b) = p(a) \cdot p(b \mid a)$. in terms of message length $l$ this becomes $l(a \land b) = l(b) + l(a \mid b) = l(a) + l(b \mid a)$. this means that if all the information describing an event is given, then the length of that information may be used to give the raw probability of the event. so if the information describing the occurrence of a is given, along with the information describing b given a, then all the information describing a and b has been given. = = = = overfitting = = = = overfitting occurs when the model matches the random noise and not the pattern in the data. for example, take the situation where a curve is fitted to a set of points. if a polynomial with many terms is fitted then it can more closely represent the data. the fit will be better, and the information needed to describe the deviations from the fitted curve will be smaller. smaller information length means higher probability. however, the information needed to describe the curve must also be considered. the total information for a curve with many terms may be greater than for a curve with fewer terms that has a worse fit but needs less information
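A small worked example may make the equivalence concrete; the bit counts are invented for illustration and are not from the article.

$$l(a) = 3,\quad l(b \mid a) = 2 \;\Rightarrow\; l(a \land b) = 5,\qquad p(a \land b) = 2^{-5} = 2^{-3} \cdot 2^{-2} = p(a)\,p(b \mid a)$$

The identity $p = 2^{-l}$ used here is the one the article states later for strings of bits.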
| Inductive probability | wikipedia |
means higher probability. however, the information needed to describe the curve must also be considered. the total information for a curve with many terms may be greater than for a curve with fewer terms that has a worse fit but needs less information to describe the polynomial. = = = inference based on program complexity = = = solomonoff's theory of inductive inference is also inductive inference : a bit string x is observed, and all programs that generate strings starting with x are considered. cast in the form of inductive inference, the programs are theories that imply the observation of the bit string x. the method used here to give probabilities for inductive inference is based on solomonoff's theory of inductive inference. = = = = detecting patterns in the data = = = = if all the bits are 1, then people infer that there is a bias in the coin and that it is more likely that the next bit is also 1. this is described as learning from, or detecting, a pattern in the data. such a pattern may be represented by a computer program. a short computer program may be written that produces a series of bits which are all 1. if the length of the program k is $l(k)$ bits then its prior probability is $p(k) = 2^{-l(k)}$. the length of the shortest program that represents the string of bits is called the kolmogorov complexity. kolmogorov complexity is not computable ; this is related to the halting problem, since when searching for the shortest program some programs may go into an infinite loop. = = = = considering all theories = = = = the greek philosopher epicurus is quoted as saying " if more than one theory is consistent with the observations, keep all theories ". as in a crime novel all theories must be considered in determining the likely murderer, so with inductive probability all programs must be considered in determining the likely future bits arising from the stream of bits. programs that are already longer than n have no predictive power. the raw ( or prior ) probability that the pattern of bits is random ( has no pattern ) is $2^{-n}$. each program that produces the sequence of bits, but is shorter than n, is a theory / pattern about the bits with a probability of $2^{-k}$
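To see how this weighting favours a pattern, here is a toy calculation; both the string length and the program length are invented for illustration.

$$n = 10 \text{ bits, all } 1: \qquad p(\text{no pattern}) = 2^{-10} \approx 0.001, \qquad p(\text{5-bit program printing 1s}) = 2^{-5} \approx 0.031$$

So the "biased coin" theory carries $2^{5} = 32$ times more prior weight than the hypothesis that this particular string arose with no pattern.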
| Inductive probability | wikipedia |
( has no pattern ) is $2^{-n}$. each program that produces the sequence of bits, but is shorter than n, is a theory / pattern about the bits with a probability of $2^{-k}$, where k is the length of the program. the probability of receiving a sequence of bits y after receiving a series of bits x is then the conditional probability of receiving y given x, which is the probability of x with y appended, divided by the probability of x. = = = = universal priors = = = = the programming language affects the predictions of the next bit in the string. the language acts as a prior probability. this is particularly a problem where the programming language codes for numbers and other data types. intuitively we think that 0 and 1 are simple numbers, and that prime numbers are somehow more complex than numbers that may be composite. using the kolmogorov complexity gives an unbiased estimate ( a universal prior ) of the prior probability of a number. as a thought experiment an intelligent agent may be fitted with a data input device giving a series of numbers, after applying some transformation function to the raw numbers. another agent might have the same input device with a different transformation function. the agents do not see or know about these transformation functions. then there appears no rational basis for preferring one function over another. a universal prior ensures that although two agents may have different initial probability distributions for the data input, the difference will be bounded by a constant. so universal priors do not eliminate an initial bias, but they reduce and limit it. whenever we describe an event in a language, either using a natural language or other, the language has encoded in it our prior expectations. so some reliance on prior probabilities is inevitable. a problem arises where an intelligent agent's prior expectations interact with the environment to form a self - reinforcing feedback loop. this is the problem of bias or prejudice. universal priors reduce but do not eliminate this problem. = = = universal artificial intelligence = = = the theory of universal artificial intelligence applies decision theory to inductive probabilities. the theory shows how the best actions to optimize a reward function may be chosen. the result is a theoretical model of intelligence. it is a fundamental theory of intelligence, which optimizes the agent's behavior in : exploring the environment and performing actions to get responses that broaden the agent's knowledge ; competing or co - operating with
| Inductive probability | wikipedia |
chosen. the result is a theoretical model of intelligence. it is a fundamental theory of intelligence, which optimizes the agent's behavior in : exploring the environment and performing actions to get responses that broaden the agent's knowledge ; competing or co - operating with another agent ( games ) ; and balancing short and long term rewards. in general no agent will always provide the best actions in all situations. a particular choice made by an agent may be wrong, and the environment may provide no way for the agent to recover from an initial bad choice. however the agent is pareto optimal in the sense that no other agent will do better than this agent in this environment, without doing worse in another environment. no other agent may, in this sense, be said to be better. at present the theory is limited by incomputability ( the halting problem ). approximations may be used to avoid this. processing speed and combinatorial explosion remain the primary limiting factors for artificial intelligence. = = probability = = probability is the representation of uncertain or partial knowledge about the truth of statements. probabilities are subjective and personal estimates of likely outcomes based on past experience and inferences made from the data. this description of probability may seem strange at first. in natural language we refer to " the probability " that the sun will rise tomorrow. we do not refer to " your probability " that the sun will rise. but in order for inference to be correctly modeled, probability must be personal, and the act of inference generates new posterior probabilities from prior probabilities. probabilities are personal because they are conditional on the knowledge of the individual. probabilities are subjective because they always depend, to some extent, on prior probabilities assigned by the individual. subjective should not be taken here to mean vague or undefined. the term intelligent agent is used to refer to the holder of the probabilities. the intelligent agent may be a human or a machine. if the intelligent agent does not interact with the environment then the probability will converge over time to the frequency of the event. if however the agent uses the probability to interact with the environment there may be a feedback, so that two agents in the identical environment starting with only slightly different priors end up with completely different probabilities. in this case optimal decision theory as in marcus hutter's universal artificial intelligence will give pareto optimal performance for the agent. this means that no other intelligent agent could do better in one environment without doing worse in another environment. = = = comparison to deductive probability
| Inductive probability | wikipedia |
decision theory as in marcus hutter's universal artificial intelligence will give pareto optimal performance for the agent. this means that no other intelligent agent could do better in one environment without doing worse in another environment. = = = comparison to deductive probability = = = in deductive probability theories, probabilities are absolutes, independent of the individual making the assessment. but deductive probabilities are based on : shared knowledge, and assumed facts that should be inferred from the data. for example, in a trial the participants are aware of the outcome of all the previous history of trials. they also assume that each outcome is equally probable. together this allows a single unconditional value of probability to be defined. but in reality each individual does not have the same information, and in general the probability of each outcome is not equal. the dice may be loaded, and this loading needs to be inferred from the data. = = = probability as estimation = = = the principle of indifference has played a key role in probability theory. it says that if n statements are symmetric so that one condition cannot be preferred over another then all statements are equally probable. taken seriously, in evaluating probability this principle leads to contradictions. suppose there are 3 bags of gold in the distance and you are asked to select one. because of the distance you cannot see the bag sizes, so you estimate, using the principle of indifference, that each bag has an equal amount of gold : each bag has one third of the gold. now, while you are not looking, someone takes one of the bags and divides it into 3 bags. now there are 5 bags of gold. the principle of indifference now says each bag has one fifth of the gold. a bag that was estimated to have one third of the gold is now estimated to have one fifth of the gold. taken as values associated with the bags, the two estimates are different and therefore contradictory. but taken as an estimate given under a particular scenario, both values are separate estimates given under different circumstances and there is no reason to believe they are equal. estimates of prior probabilities are particularly suspect. estimates will be constructed that do not follow any consistent frequency distribution. for this reason prior probabilities are considered as estimates of probabilities rather than probabilities. a full theoretical treatment would associate with each probability : the statement, the prior knowledge, the prior probabilities, and the estimation procedure used to give the probability. = = = combining probability approaches = = = inductive probability combines two different approaches to probability : probability and
| Inductive probability | wikipedia |
a full theoretical treatment would associate with each probability : the statement, the prior knowledge, the prior probabilities, and the estimation procedure used to give the probability. = = = combining probability approaches = = = inductive probability combines two different approaches to probability : probability and information, and probability and frequency. each approach gives a slightly different viewpoint. information theory is used in relating probabilities to quantities of information. this approach is often used in giving estimates of prior probabilities. frequentist probability defines probabilities as objective statements about how often an event occurs. this approach may be stretched by defining the trials to be over possible worlds. statements about possible worlds define events. = = probability and information = = whereas logic represents only two values, true and false, as the values of a statement, probability associates a number in [ 0, 1 ] to each statement. if the probability of a statement is 0, the statement is false. if the probability of a statement is 1 the statement is true. in considering some data as a string of bits, with the prior probabilities of 1 and 0 equal, each extra bit halves the probability of a sequence of bits. this leads to the conclusion that $p(x) = 2^{-l(x)}$, where $p(x)$ is the probability of the string of bits $x$ and $l(x)$ is its length. the prior probability of any statement is calculated from the number of bits needed to state it. see also information theory. = = = combining information = = = two statements a and b may be represented by two separate encodings. then the length of the encoding is $l(a \land b) = l(a) + l(b)$, or in terms of probability, $p(a \land b) = p(a)\,p(b)$. but this law is not always true, because there may be a shorter method of encoding b if we assume a. so the above probability law applies only if a
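A short numeric illustration of why the product rule needs independence; the bit lengths are invented for the example.

$$l(a) = 4,\; l(b) = 6 \;\Rightarrow\; l(a \land b) = 10,\quad p(a \land b) = 2^{-10}$$

If, however, knowing $a$ lets $b$ be encoded in only 3 bits, then $l(a \land b) = 4 + 3 = 7$ and $p(a \land b) = 2^{-7}$, which is larger than $p(a)\,p(b) = 2^{-10}$; the simple product rule under-weights statements that share information.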
| Inductive probability | wikipedia |
$p(a \land b) = p(a)\,p(b)$. but this law is not always true, because there may be a shorter method of encoding b if we assume a. so the above probability law applies only if a and b are " independent ". = = = the internal language of information = = = the primary use of the information approach to probability is to provide estimates of the complexity of statements. recall that occam's razor states that " all things being equal, the simplest theory is the most likely to be correct ". in order to apply this rule, first there needs to be a definition of what " simplest " means. information theory defines simplest to mean having the shortest encoding. knowledge is represented as statements. each statement is a boolean expression. expressions are encoded by a function that takes a description ( as against the value ) of the expression and encodes it as a bit string. the length of the encoding of a statement gives an estimate of the probability of a statement. this probability estimate will often be used as the prior probability of a statement. technically this estimate is not a probability because it is not constructed from a frequency distribution. the probability estimates given by it do not always obey the law of total probability. applying the law of total probability to various scenarios will usually give a more accurate estimate of the prior probability than the estimate from the length of the statement. = = = = encoding expressions = = = = an expression is constructed from sub - expressions : constants ( including function identifiers ), applications of functions, and quantifiers. a huffman code must distinguish the 3 cases. the length of each code is based on the frequency of each type of sub - expression. initially constants are all assigned the same length / probability. later constants may be assigned a probability using the huffman code based on the number of uses of the function id in all expressions recorded so far. in using a huffman code the goal is to estimate probabilities, not to compress the data. the length of a function application is the length of the function identifier constant plus the sum of the sizes of the expressions for each parameter. the length of a quantifier is the length of the expression being quantified over. = = = = distribution of numbers = = = = no explicit representation of natural numbers is given. however natural numbers may be constructed by applying the successor function to 0, and then
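A toy cost calculation may make the encoding rule concrete. The specific bit costs (2 bits for the "application" tag, 4 bits per constant, including the function identifier) are invented for illustration and are not given in the text.

$$l\big(f(x, y)\big) = 2 + 4 + 4 + 4 = 14 \text{ bits} \;\Rightarrow\; \text{prior estimate } p = 2^{-14}$$

As the text notes, this is an estimate of probability rather than an attempt to compress the expression.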
| Inductive probability | wikipedia |
the length of a quantifier is the length of the expression being quantified over. = = = = distribution of numbers = = = = no explicit representation of natural numbers is given. however natural numbers may be constructed by applying the successor function to 0, and then applying other arithmetic functions. a distribution of natural numbers is implied by this, based on the complexity of constructing each number. rational numbers are constructed by the division of natural numbers. the simplest representation has no common factors between the numerator and the denominator. this allows the probability distribution of natural numbers to be extended to rational numbers. = = probability and frequency = = the probability of an event may be interpreted as the frequency of outcomes where the statement is true divided by the total number of outcomes. if the outcomes form a continuum the frequency may need to be replaced with a measure. events are sets of outcomes. statements may be related to events. a boolean statement b about outcomes defines a set of outcomes b, $b = \{\,x : b(x)\,\}$. = = = conditional probability = = = each probability is always associated with the state of knowledge at a particular point in the argument. probabilities before an inference are known as prior probabilities, and probabilities after are known as posterior probabilities. probability depends on the facts known. the truth of a fact limits the domain of outcomes to the outcomes consistent with the fact. prior probabilities are the probabilities before a fact is known. posterior probabilities are the probabilities after a fact is known ; the posterior probabilities are said to be conditional on the fact. the probability that b is true given that a is true is written as $p(b \mid a)$. all probabilities are in some sense conditional. the prior probability of b is $p(b) = p(b \mid \top)$. = = = the frequentist approach applied to possible worlds = = = in the frequentist approach, probabilities are defined as the ratio of the number of outcomes within an event to the total number of outcomes. in the possible world model each possible world is an outcome, and statements about possible worlds define events. the probability of a statement being true is the number of possible
| Inductive probability | wikipedia |
as the ratio of the number of outcomes within an event to the total number of outcomes. in the possible world model each possible world is an outcome, and statements about possible worlds define events. the probability of a statement being true is the number of possible worlds where the statement is true divided by the total number of possible worlds. the probability of a statement a being true about possible worlds is then,

$$p(a) = \frac{|\{\,x : a(x)\,\}|}{|\{\,x : \top\,\}|}$$

for a conditional probability,

$$p(b \mid a) = \frac{|\{\,x : a(x) \land b(x)\,\}|}{|\{\,x : a(x)\,\}|}$$

then

$$\begin{aligned} p(a \land b) &= \frac{|\{\,x : a(x) \land b(x)\,\}|}{|\{\,x : \top\,\}|} \\ &= \frac{|\{\,x : a(x) \land b(x)\,\}|}{|\{\,x : a(x)\,\}|} \cdot \frac{|\{\,x : a(x)\,\}|}{|\{\,x : \top\,\}|} \\ &= p(a)\,p(b \mid a) \end{aligned}$$

using symmetry this equation may be written out as bayes' law,

$$p(a \land b) = p(a)\,p(b \mid a) = p(b)\,p(a \mid b)$$
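A small counting example, with world counts invented for illustration: suppose there are 10 possible worlds, $a$ holds in 4 of them, and $a$ and $b$ hold together in 2.

$$p(a) = \tfrac{4}{10}, \qquad p(b \mid a) = \tfrac{2}{4}, \qquad p(a \land b) = \tfrac{2}{10} = p(a)\,p(b \mid a)$$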
| Inductive probability | wikipedia |
out as bayes' law,

$$p(a \land b) = p(a)\,p(b \mid a) = p(b)\,p(a \mid b)$$

this law describes the relationship between prior and posterior probabilities when new facts are learnt. written as quantities of information bayes' theorem becomes,

$$l(a \land b) = l(a) + l(b \mid a) = l(b) + l(a \mid b)$$

two statements a and b are said to be independent if knowing the truth of a does not change the probability of b. mathematically this is,

$$p(b) = p(b \mid a)$$

then bayes' theorem reduces to,

$$p(a \land b) = p(a)\,p(b)$$

= = = the law of total probability = = = for a set of mutually exclusive possibilities $a_i$, the sum of the posterior probabilities must be 1,

$$\sum_i p(a_i \mid b) = 1$$

substituting using bayes' theorem gives the law of total probability,

$$\sum_i p(b \mid a_i)\,p(a_i) = \sum_i p(a_i \mid b)\,p(b)$$

$$p(b) = \sum_i p(b \mid a_i)\,p(a_i)$$

this result is used to give the extended form of bayes' theorem,
|
Inductive probability
|
wikipedia
|
p ( b ) = \ sum _ { i } { p ( b | a _ { i } ) p ( a _ { i } ) } } this result is used to give the extended form of bayes'theorem, p ( a i | b ) = p ( b | a i ) p ( a i ) j p ( b | a j ) p ( a j ) { \ displaystyle p ( a _ { i } | b ) = { \ frac { p ( b | a _ { i } ) p ( a _ { i } ) } { \ sum _ { j } { p ( b | a _ { j } ) p ( a _ { j } ) } } } } this is the usual form of bayes'theorem used in practice, because it guarantees the sum of all the posterior probabilities for a i { \ displaystyle a _ { i } } is 1. = = = alternate possibilities = = = for mutually exclusive possibilities, the probabilities add. p ( a ∨ b ) = p ( a ) + p ( b ), if p ( a ∧ b ) = 0 { \ displaystyle p ( a \ lor b ) = p ( a ) + p ( b ), \ qquad { \ text { if } } p ( a \ land b ) = 0 } using a ∨ b = ( a ∧ ¬ ( a ∧ b ) ) ∨ ( b ∧ ¬ ( a ∧ b ) ) ∨ ( a ∧ b ) { \ displaystyle a \ lor b = ( a \ land \ neg ( a \ land b ) ) \ lor ( b \ land \ neg ( a \ land b ) ) \ lor ( a \ land b ) } then the alternatives a ∧ ¬ ( a ∧ b ), b ∧ ¬ ( a ∧ b ), a ∧ b { \ displaystyle a \ land \ neg ( a \ land b ), \ quad b \ land \ neg ( a \ land b ), \ quad a \ land b } are all mutually exclusive. also, ( a ∧ ¬ ( a ∧ b ) ) ∨ ( a ∧ b ) = a { \ displaystyle ( a \ land \ neg ( a \ land b ) ) \ lor ( a \ land b ) = a } p ( a ∧ ¬ ( a ∧ b ) ) + p ( a ∧ b ) = p ( a ) { \ displaystyle p ( a
|
Inductive probability
|
wikipedia
|
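the extended form of bayes'theorem above can be evaluated with a few lines of arithmetic. the numbers below are invented ; the point is that dividing by the total - probability denominator forces the posteriors to sum to 1.

# priors p(a_i) and likelihoods p(b | a_i) for mutually exclusive hypotheses
# (the numbers are invented for illustration).
priors = [0.5, 0.3, 0.2]
likelihoods = [0.1, 0.4, 0.7]

evidence = sum(p * l for p, l in zip(priors, likelihoods))  # law of total probability, p(b)
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]  # extended bayes' theorem

print(posteriors)
print(sum(posteriors))  # 1.0 by construction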
{\displaystyle (a\land \neg (a\land b))\lor (a\land b)=a} {\displaystyle p(a\land \neg (a\land b))+p(a\land b)=p(a)} {\displaystyle p(a\land \neg (a\land b))=p(a)-p(a\land b)} so, putting it all together, {\displaystyle {\begin{aligned}p(a\lor b)&=p((a\land \neg (a\land b))\lor (b\land \neg (a\land b))\lor (a\land b))\\&=p(a\land \neg (a\land b))+p(b\land \neg (a\land b))+p(a\land b)\\&=p(a)-p(a\land b)+p(b)-p(a\land b)+p(a\land b)\\&=p(a)+p(b)-p(a\land b)\end{aligned}}} = = = negation = = = as, {\displaystyle a\lor \neg a=\top } then {\displaystyle p(a)+p(\neg a)=1} = = = implication and conditional probability = = = implication is related to conditional probability by the following equation, a → b
|
Inductive probability
|
wikipedia
|
p ( ¬ a ) = 1 {\displaystyle p(a)+p(\neg a)=1} = = = implication and conditional probability = = = implication is related to conditional probability by the following equation, {\displaystyle a\to b\iff p(b|a)=1} derivation, {\displaystyle {\begin{aligned}a\to b&\iff p(a\to b)=1\\&\iff p(a\land b\lor \neg a)=1\\&\iff p(a\land b)+p(\neg a)=1\\&\iff p(a\land b)=p(a)\\&\iff p(a)\cdot p(b|a)=p(a)\\&\iff p(b|a)=1\end{aligned}}} = = bayesian hypothesis testing = = bayes'theorem may be used to estimate the probability of a hypothesis or theory h, given some facts f. the posterior probability of h is then {\displaystyle p(h|f)={\frac {p(h)p(f|h)}{p(f)}}} or in terms of information, {\displaystyle p(h|f)=2^{-(l(h)+l(f|h)-l(f))}} by assuming the hypothesis is true, a simpler representation of the statement f may be given. the length of the encoding of this simpler representation is {\displaystyle l(f|h)}. l ( h ) + l ( f | h
|
Inductive probability
|
wikipedia
|
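a minimal sketch of the information form of bayesian hypothesis testing described above, using invented encoding lengths in bits. as the article notes further on, when l ( f ) is estimated from an encoding length the result is only a relative probability, not a calibrated one.

# invented encoding lengths, in bits
L_h = 20          # length of the hypothesis h
L_f_given_h = 50  # length of the facts f encoded assuming h is true
L_f = 80          # length of the facts f encoded without any hypothesis

# the evidence for h is the compression it achieves: l(f) - (l(h) + l(f|h))
relative_p = 2 ** -(L_h + L_f_given_h - L_f)
print(relative_p)  # 2**10 = 1024: h is roughly 1024 times more probable than not holding h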
a simpler representation of the statement f may be given. the length of the encoding of this simpler representation is l ( f | h ). { \ displaystyle l ( f | h ). } l ( h ) + l ( f | h ) { \ displaystyle l ( h ) + l ( f | h ) } represents the amount of information needed to represent the facts f, if h is true. l ( f ) { \ displaystyle l ( f ) } is the amount of information needed to represent f without the hypothesis h. the difference is how much the representation of the facts has been compressed by assuming that h is true. this is the evidence that the hypothesis h is true. if l ( f ) { \ displaystyle l ( f ) } is estimated from encoding length then the probability obtained will not be between 0 and 1. the value obtained is proportional to the probability, without being a good probability estimate. the number obtained is sometimes referred to as a relative probability, being how much more probable the theory is than not holding the theory. if a full set of mutually exclusive hypothesis that provide evidence is known, a proper estimate may be given for the prior probability p ( f ) { \ displaystyle p ( f ) }. = = = set of hypothesis = = = probabilities may be calculated from the extended form of bayes'theorem. given all mutually exclusive hypothesis h i { \ displaystyle h _ { i } } which give evidence, such that, l ( h i ) + l ( f | h i ) < l ( f ) { \ displaystyle l ( h _ { i } ) + l ( f | h _ { i } ) < l ( f ) } and also the hypothesis r, that none of the hypothesis is true, then, p ( h i | f ) = p ( h i ) p ( f | h i ) p ( f | r ) + j p ( h j ) p ( f | h j ) p ( r | f ) = p ( f | r ) p ( f | r ) + j p ( h j ) p ( f | h j ) { \ displaystyle { \ begin { aligned } p ( h _ { i } | f ) & = { \ frac { p ( h _ { i } ) p ( f | h _ { i } ) } { p ( f | r ) + \ sum _ { j } { p ( h _ { j } ) p ( f
|
Inductive probability
|
wikipedia
|
{ \ frac { p ( h _ { i } ) p ( f | h _ { i } ) } { p ( f | r ) + \ sum _ { j } { p ( h _ { j } ) p ( f | h _ { j } ) } } } \ \ [ 8pt ] p ( r | f ) & = { \ frac { p ( f | r ) } { p ( f | r ) + \ sum _ { j } { p ( h _ { j } ) p ( f | h _ { j } ) } } } \ end { aligned } } } in terms of information, p ( h i | f ) = 2 − ( l ( h i ) + l ( f | h i ) ) 2 − l ( f | r ) + j 2 − ( l ( h j ) + l ( f | h j ) ) p ( r | f ) = 2 − l ( f | r ) 2 − l ( f | r ) + j 2 − ( l ( h j ) + l ( f | h j ) ) { \ displaystyle { \ begin { aligned } p ( h _ { i } | f ) & = { \ frac { 2 ^ { - ( l ( h _ { i } ) + l ( f | h _ { i } ) ) } } { 2 ^ { - l ( f | r ) } + \ sum _ { j } 2 ^ { - ( l ( h _ { j } ) + l ( f | h _ { j } ) ) } } } \ \ [ 8pt ] p ( r | f ) & = { \ frac { 2 ^ { - l ( f | r ) } } { 2 ^ { - l ( f | r ) } + \ sum _ { j } { 2 ^ { - ( l ( h _ { j } ) + l ( f | h _ { j } ) ) } } } } \ end { aligned } } } in most situations it is a good approximation to assume that f { \ displaystyle f } is independent of r { \ displaystyle r }, which means p ( f | r ) = p ( f ) { \ displaystyle p ( f | r ) = p ( f ) } giving, p ( h i | f ) ≈ 2 − ( l ( h i ) + l ( f | h i ) ) 2 − l ( f ) + j
|
Inductive probability
|
wikipedia
|
displaystyle p ( f | r ) = p ( f ) } giving, p ( h i | f ) ≈ 2 − ( l ( h i ) + l ( f | h i ) ) 2 − l ( f ) + j 2 − ( l ( h j ) + l ( f | h j ) ) p ( r | f ) ≈ 2 − l ( f ) 2 − l ( f ) + j 2 − ( l ( h j ) + l ( f | h j ) ) { \ displaystyle { \ begin { aligned } p ( h _ { i } | f ) & \ approx { \ frac { 2 ^ { - ( l ( h _ { i } ) + l ( f | h _ { i } ) ) } } { 2 ^ { - l ( f ) } + \ sum _ { j } { 2 ^ { - ( l ( h _ { j } ) + l ( f | h _ { j } ) ) } } } } \ \ [ 8pt ] p ( r | f ) & \ approx { \ frac { 2 ^ { - l ( f ) } } { 2 ^ { - l ( f ) } + \ sum _ { j } { 2 ^ { - ( l ( h _ { j } ) + l ( f | h _ { j } ) ) } } } } \ end { aligned } } } = = boolean inductive inference = = abductive inference starts with a set of facts f which is a statement ( boolean expression ). abductive reasoning is of the form, a theory t implies the statement f. as the theory t is simpler than f, abduction says that there is a probability that the theory t is implied by f. the theory t, also called an explanation of the condition f, is an answer to the ubiquitous factual " why " question. for example, for the condition f is " why do apples fall? ". the answer is a theory t that implies that apples fall ; f = g m 1 m 2 r 2 { \ displaystyle f = g { \ frac { m _ { 1 } m _ { 2 } } { r ^ { 2 } } } } inductive inference is of the form, all observed objects in a class c have a property p. therefore there is a probability that all objects in a class c have a property p. in terms of abductive inference, all objects in a class c or
|
Inductive probability
|
wikipedia
|
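the approximation above reduces comparing hypotheses to arithmetic on description lengths. a small sketch with invented lengths, including the catch - all alternative r that none of the listed hypotheses is true :

# invented description lengths in bits: l(h_i) + l(f | h_i) for each hypothesis,
# and l(f) for the raw facts (used for the "no hypothesis explains f" alternative r).
total_lengths = {"h1": 60, "h2": 65, "h3": 90}
L_f = 80

weights = {h: 2.0 ** -l for h, l in total_lengths.items()}
weights["r"] = 2.0 ** -L_f            # p(f | r) approximated by p(f)
z = sum(weights.values())

posteriors = {h: w / z for h, w in weights.items()}
print(posteriors)  # h1 dominates; h3 is longer than the facts, so it contributes almost nothing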
inference is of the form, all observed objects in a class c have a property p. therefore there is a probability that all objects in a class c have a property p. in terms of abductive inference, all objects in a class c or set have a property p is a theory that implies the observed condition, all observed objects in a class c have a property p. so inductive inference is a general case of abductive inference. in common usage the term inductive inference is often used to refer to both abductive and inductive inference. = = = generalization and specialization = = = inductive inference is related to generalization. generalizations may be formed from statements by replacing a specific value with membership of a category, or by replacing membership of a category with membership of a broader category. in deductive logic, generalization is a powerful method of generating new theories that may be true. in inductive inference generalization generates theories that have a probability of being true. the opposite of generalization is specialization. specialization is used in applying a general rule to a specific case. specializations are created from generalizations by replacing membership of a category by a specific value, or by replacing a category with a sub category. the linnaen classification of living things and objects forms the basis for generalization and specification. the ability to identify, recognize and classify is the basis for generalization. perceiving the world as a collection of objects appears to be a key aspect of human intelligence. it is the object oriented model, in the non computer science sense. the object oriented model is constructed from our perception. in particularly vision is based on the ability to compare two images and calculate how much information is needed to morph or map one image into another. computer vision uses this mapping to construct 3d images from stereo image pairs. inductive logic programming is a means of constructing theory that implies a condition. plotkin's " relative least general generalization ( rlgg ) " approach constructs the simplest generalization consistent with the condition. = = = newton's use of induction = = = isaac newton used inductive arguments in constructing his law of universal gravitation. starting with the statement, the center of an apple falls towards the center of the earth. generalizing by replacing apple for object, and earth for object gives, in a two body system, the center of an object falls towards the center of another object. the theory explains all objects falling, so there is strong evidence for
|
Inductive probability
|
wikipedia
|
of the earth. generalizing by replacing apple for object, and earth for object gives, in a two body system, the center of an object falls towards the center of another object. the theory explains all objects falling, so there is strong evidence for it. the second observation, the planets appear to follow an elliptical path. after some complicated mathematical calculus, it can be seen that if the acceleration follows the inverse square law then objects will follow an ellipse. so induction gives evidence for the inverse square law. using galileo's observation that all objects drop with the same speed, f 1 = m 1 a 1 = m 1 k 1 r 2 i 1 { \ displaystyle f _ { 1 } = m _ { 1 } a _ { 1 } = { \ frac { m _ { 1 } k _ { 1 } } { r ^ { 2 } } } i _ { 1 } } f 2 = m 2 a 2 = m 2 k 2 r 2 i 2 { \ displaystyle f _ { 2 } = m _ { 2 } a _ { 2 } = { \ frac { m _ { 2 } k _ { 2 } } { r ^ { 2 } } } i _ { 2 } } where i 1 { \ displaystyle i _ { 1 } } and i 2 { \ displaystyle i _ { 2 } } vectors towards the center of the other object. then using newton's third law f 1 = − f 2 { \ displaystyle f _ { 1 } = - f _ { 2 } } f = g m 1 m 2 r 2 { \ displaystyle f = g { \ frac { m _ { 1 } m _ { 2 } } { r ^ { 2 } } } } = = = probabilities for inductive inference = = = implication determines condition probability as, t → f p ( f | t ) = 1 { \ displaystyle t \ to f \ iff p ( f | t ) = 1 } so, p ( f | t ) = 1 { \ displaystyle p ( f | t ) = 1 } l ( f | t ) = 0 { \ displaystyle l ( f | t ) = 0 } this result may be used in the probabilities given for bayesian hypothesis testing. for a single theory, h = t and, p ( t | f ) = p ( t ) p ( f ) { \ displaystyle p ( t | f ) = { \ frac { p
|
Inductive probability
|
wikipedia
|
##bilities given for bayesian hypothesis testing. for a single theory, h = t and, p ( t | f ) = p ( t ) p ( f ) { \ displaystyle p ( t | f ) = { \ frac { p ( t ) } { p ( f ) } } } or in terms of information, the relative probability is, p ( t | f ) = 2 − ( l ( t ) − l ( f ) ) { \ displaystyle p ( t | f ) = 2 ^ { - ( l ( t ) - l ( f ) ) } } note that this estimate for p ( t | f ) is not a true probability. if l ( t i ) < l ( f ) { \ displaystyle l ( t _ { i } ) < l ( f ) } then the theory has evidence to support it. then for a set of theories t i = h i { \ displaystyle t _ { i } = h _ { i } }, such that l ( t i ) < l ( f ) { \ displaystyle l ( t _ { i } ) < l ( f ) }, p ( t i | f ) = p ( t i ) p ( f | r ) + j p ( t j ) { \ displaystyle p ( t _ { i } | f ) = { \ frac { p ( t _ { i } ) } { p ( f | r ) + \ sum _ { j } { p ( t _ { j } ) } } } } p ( r | f ) = p ( f | r ) p ( f | r ) + j p ( t j ) { \ displaystyle p ( r | f ) = { \ frac { p ( f | r ) } { p ( f | r ) + \ sum _ { j } { p ( t _ { j } ) } } } } giving, p ( t i | f ) ≈ 2 − l ( t i ) 2 − l ( f ) + j 2 − l ( t j ) { \ displaystyle p ( t _ { i } | f ) \ approx { \ frac { 2 ^ { - l ( t _ { i } ) } } { 2 ^ { - l ( f ) } + \ sum _ { j } { 2 ^ { - l ( t _ { j } ) } } } } } p ( r | f ) ≈ 2 − l ( f ) 2
|
Inductive probability
|
wikipedia
|
} { 2 ^ { - l ( f ) } + \ sum _ { j } { 2 ^ { - l ( t _ { j } ) } } } } } p ( r | f ) ≈ 2 − l ( f ) 2 − l ( f ) + j 2 − l ( t j ) { \ displaystyle p ( r | f ) \ approx { \ frac { 2 ^ { - l ( f ) } } { 2 ^ { - l ( f ) } + \ sum _ { j } { 2 ^ { - l ( t _ { j } ) } } } } } = = derivations = = = = = derivation of inductive probability = = = make a list of all the shortest programs k i { \ displaystyle k _ { i } } that each produce a distinct infinite string of bits, and satisfy the relation, t n ( r ( k i ) ) = x { \ displaystyle t _ { n } ( r ( k _ { i } ) ) = x } where r ( k i ) { \ displaystyle r ( k _ { i } ) } is the result of running the program k i { \ displaystyle k _ { i } } and t n { \ displaystyle t _ { n } } truncates the string after n bits. the problem is to calculate the probability that the source is produced by program k i, { \ displaystyle k _ { i }, } given that the truncated source after n bits is x. this is represented by the conditional probability, p ( s = r ( k i ) | t n ( s ) = x ) { \ displaystyle p ( s = r ( k _ { i } ) | t _ { n } ( s ) = x ) } using the extended form of bayes'theorem p ( s = r ( k i ) | t n ( s ) = x ) = p ( t n ( s ) = x | s = r ( k i ) ) p ( s = r ( k i ) ) j p ( t n ( s ) = x | s = r ( k j ) ) p ( s = r ( k j ) ). { \ displaystyle p ( s = r ( k _ { i } ) | t _ { n } ( s ) = x ) = { \ frac { p ( t _ { n } ( s ) = x | s = r ( k _ { i } )
|
Inductive probability
|
wikipedia
|
= r ( k _ { i } ) | t _ { n } ( s ) = x ) = { \ frac { p ( t _ { n } ( s ) = x | s = r ( k _ { i } ) ) p ( s = r ( k _ { i } ) ) } { \ sum _ { j } p ( t _ { n } ( s ) = x | s = r ( k _ { j } ) ) p ( s = r ( k _ { j } ) ) } }. } the extended form relies on the law of total probability. this means that the s = r ( k i ) { \ displaystyle s = r ( k _ { i } ) } must be distinct possibilities, which is given by the condition that each k i { \ displaystyle k _ { i } } produce a different infinite string. also one of the conditions s = r ( k i ) { \ displaystyle s = r ( k _ { i } ) } must be true. this must be true, as in the limit as n → ∞, { \ displaystyle n \ to \ infty, } there is always at least one program that produces t n ( s ) { \ displaystyle t _ { n } ( s ) }. as k i { \ displaystyle k _ { i } } are chosen so that t n ( r ( k i ) ) = x, { \ displaystyle t _ { n } ( r ( k _ { i } ) ) = x, } then, p ( t n ( s ) = x | s = r ( k i ) ) = 1 { \ displaystyle p ( t _ { n } ( s ) = x | s = r ( k _ { i } ) ) = 1 } the apriori probability of the string being produced from the program, given no information about the string, is based on the size of the program, p ( s = r ( k i ) ) = 2 − i ( k i ) { \ displaystyle p ( s = r ( k _ { i } ) ) = 2 ^ { - i ( k _ { i } ) } } giving, p ( s = r ( k i ) | t n ( s ) = x ) = 2 − i ( k i ) j 2 − i ( k j ). { \ displaystyle p ( s = r ( k _ { i } ) | t _ { n }
|
Inductive probability
|
wikipedia
|
) | t n ( s ) = x ) = 2 − i ( k i ) j 2 − i ( k j ). { \ displaystyle p ( s = r ( k _ { i } ) | t _ { n } ( s ) = x ) = { \ frac { 2 ^ { - i ( k _ { i } ) } } { \ sum _ { j } 2 ^ { - i ( k _ { j } ) } } }. } programs that are the same or longer than the length of x provide no predictive power. separate them out giving, p ( s = r ( k i ) | t n ( s ) = x ) = 2 − i ( k i ) j : i ( k j ) < n 2 − i ( k j ) + j : i ( k j ) n 2 − i ( k j ). { \ displaystyle p ( s = r ( k _ { i } ) | t _ { n } ( s ) = x ) = { \ frac { 2 ^ { - i ( k _ { i } ) } } { \ sum _ { j : i ( k _ { j } ) < n } 2 ^ { - i ( k _ { j } ) } + \ sum _ { j : i ( k _ { j } ) \ geqslant n } 2 ^ { - i ( k _ { j } ) } } }. } then identify the two probabilities as, p ( x has pattern ) = j : i ( k j ) < n 2 − i ( k j ) { \ displaystyle p ( x { \ text { has pattern } } ) = \ sum _ { j : i ( k _ { j } ) < n } 2 ^ { - i ( k _ { j } ) } } p ( x is random ) = j : i ( k j ) n 2 − i ( k j ) { \ displaystyle p ( x { \ text { is random } } ) = \ sum _ { j : i ( k _ { j } ) \ geqslant n } 2 ^ { - i ( k _ { j } ) } } but the prior probability that x is a random set of bits is 2 − n { \ displaystyle 2 ^ { - n } }. so, p ( s = r ( k i ) | t n ( s ) = x ) = 2 − i
|
Inductive probability
|
wikipedia
|
prior probability that x is a random set of bits is {\displaystyle 2^{-n}}. so, {\displaystyle p(s=r(k_{i})|t_{n}(s)=x)={\frac {2^{-i(k_{i})}}{2^{-n}+\sum _{j:i(k_{j})<n}2^{-i(k_{j})}}}.} the probability that the source is random, or unpredictable is, {\displaystyle p(\operatorname {random} (s)|t_{n}(s)=x)={\frac {2^{-n}}{2^{-n}+\sum _{j:i(k_{j})<n}2^{-i(k_{j})}}}.} = = = a model for inductive inference = = = a model of how worlds are constructed is used in determining the probabilities of theories : a random bit string is selected. a condition is constructed from the bit string. a world is constructed that is consistent with the condition. if w is the bit string then the world is created such that {\displaystyle r(w)} is true. an intelligent agent has some facts about the world, represented by the bit string c, which gives the condition, {\displaystyle c=r(c)} the set of bit strings identical with any condition x is {\displaystyle e(x)}. {\displaystyle \forall x,e(x)=\{w:r(w)\equiv x\}} a theory is a simpler condition that explains ( or implies )
|
Inductive probability
|
wikipedia
|
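the formulas above can be evaluated once the lengths of the shortest programs reproducing the observed prefix are known. in general these lengths are uncomputable, so the sketch below simply assumes some invented program lengths i ( k_i ) and a prefix length n.

# observed prefix length n and invented lengths i(k_i) of the shortest programs
# whose output starts with the observed bits x.
n = 32
program_lengths = [10, 14, 40]   # the last program is >= n, so it has no predictive power

predictive = [l for l in program_lengths if l < n]
denom = 2.0 ** -n + sum(2.0 ** -l for l in predictive)

p_program = {l: (2.0 ** -l) / denom for l in predictive}
p_random = (2.0 ** -n) / denom

print(p_program)   # shorter programs get most of the probability mass
print(p_random)    # probability the source is random / unpredictable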
: r ( w ) ≡ x } { \ displaystyle \ forall x, e ( x ) = \ { w : r ( w ) \ equiv x \ } } a theory is a simpler condition that explains ( or implies ) c. the set of all such theories is called t, t ( c ) = { t : t → c } { \ displaystyle t ( c ) = \ { t : t \ to c \ } } = = = = applying bayes'theorem = = = = extended form of bayes'theorem may be applied p ( a i | b ) = p ( b | a i ) p ( a i ) j p ( b | a j ) p ( a j ), { \ displaystyle p ( a _ { i } | b ) = { \ frac { p ( b | a _ { i } ) \, p ( a _ { i } ) } { \ sum _ { j } p ( b | a _ { j } ) \, p ( a _ { j } ) } }, } where, b = e ( c ) { \ displaystyle b = e ( c ) } a i = e ( t ) { \ displaystyle a _ { i } = e ( t ) } to apply bayes'theorem the following must hold : a i { \ displaystyle a _ { i } } is a partition of the event space. for t ( c ) { \ displaystyle t ( c ) } to be a partition, no bit string n may belong to two theories. to prove this assume they can and derive a contradiction, ( n ∈ t ) ∧ ( n ∈ m ) ∧ ( n = m ) ∧ ( n ∈ e ( n ) ∧ n ∈ e ( m ) ) { \ displaystyle ( n \ in t ) \ land ( n \ in m ) \ land ( n \ neq m ) \ land ( n \ in e ( n ) \ land n \ in e ( m ) ) } ( n = m ) ∧ r ( n ) ≡ n ∧ r ( n ) ≡ m { \ displaystyle \ implies ( n \ neq m ) \ land r ( n ) \ equiv n \ land r ( n ) \ equiv m } { \ displaystyle \ implies \ bot } secondly prove that t includes all outcomes consistent with the condition. as all theories consistent with c are included then r ( w ) { \ displays
|
Inductive probability
|
wikipedia
|
land r ( n ) \ equiv m } { \ displaystyle \ implies \ bot } secondly prove that t includes all outcomes consistent with the condition. as all theories consistent with c are included then r ( w ) { \ displaystyle r ( w ) } must be in this set. so bayes theorem may be applied as specified giving, t ∈ t ( c ), p ( e ( t ) | e ( c ) ) = p ( e ( t ) ) ⋅ p ( e ( c ) | e ( t ) ) j ∈ t ( c ) p ( e ( j ) ) ⋅ p ( e ( c ) | e ( j ) ) { \ displaystyle \ forall t \ in t ( c ), p ( e ( t ) | e ( c ) ) = { \ frac { p ( e ( t ) ) \ cdot p ( e ( c ) | e ( t ) ) } { \ sum _ { j \ in t ( c ) } p ( e ( j ) ) \ cdot p ( e ( c ) | e ( j ) ) } } } using the implication and condition probability law, the definition of t ( c ) { \ displaystyle t ( c ) } implies, t ∈ t ( c ), p ( e ( c ) | e ( t ) ) = 1 { \ displaystyle \ forall t \ in t ( c ), p ( e ( c ) | e ( t ) ) = 1 } the probability of each theory in t is given by, t ∈ t ( c ), p ( e ( t ) ) = n : r ( n ) ≡ t 2 − l ( n ) { \ displaystyle \ forall t \ in t ( c ), p ( e ( t ) ) = \ sum _ { n : r ( n ) \ equiv t } 2 ^ { - l ( n ) } } so, t ∈ t ( c ), p ( e ( t ) | e ( c ) ) = n : r ( n ) ≡ t 2 − l ( n ) j ∈ t ( c ) m : r ( m ) ≡ j 2 − l ( m ) { \ displaystyle \ forall t \ in t ( c ), p ( e ( t ) | e ( c ) ) = { \ frac { \ sum _ { n : r ( n ) \ equiv t
|
Inductive probability
|
wikipedia
|
m ) { \ displaystyle \ forall t \ in t ( c ), p ( e ( t ) | e ( c ) ) = { \ frac { \ sum _ { n : r ( n ) \ equiv t } 2 ^ { - l ( n ) } } { \ sum _ { j \ in t ( c ) } \ sum _ { m : r ( m ) \ equiv j } 2 ^ { - l ( m ) } } } } finally the probabilities of the events may be identified with the probabilities of the condition which the outcomes in the event satisfy, t ∈ t ( c ), p ( e ( t ) | e ( c ) ) = p ( t | c ) { \ displaystyle \ forall t \ in t ( c ), p ( e ( t ) | e ( c ) ) = p ( t | c ) } giving t ∈ t ( c ), p ( t | c ) = n : r ( n ) ≡ t 2 − l ( n ) j ∈ t ( c ) m : r ( m ) ≡ j 2 − l ( m ) { \ displaystyle \ forall t \ in t ( c ), p ( t | c ) = { \ frac { \ sum _ { n : r ( n ) \ equiv t } 2 ^ { - l ( n ) } } { \ sum _ { j \ in t ( c ) } \ sum _ { m : r ( m ) \ equiv j } 2 ^ { - l ( m ) } } } } this is the probability of the theory t after observing that the condition c holds. = = = = removing theories without predictive power = = = = theories that are less probable than the condition c have no predictive power. separate them out giving, t ∈ t ( c ), p ( t | c ) = p ( e ( t ) ) ( j : j ∈ t ( c ) ∧ p ( e ( j ) ) > p ( e ( c ) ) p ( e ( j ) ) ) + ( j : j ∈ t ( c ) ∧ p ( e ( j ) ) ≤ p ( e ( c ) ) p ( j ) ) { \ displaystyle \ forall t \ in t ( c ), p ( t | c ) = { \ frac { p ( e ( t ) ) } {
|
Inductive probability
|
wikipedia
|
) ≤ p ( e ( c ) ) p ( j ) ) { \ displaystyle \ forall t \ in t ( c ), p ( t | c ) = { \ frac { p ( e ( t ) ) } { ( \ sum _ { j : j \ in t ( c ) \ land p ( e ( j ) ) > p ( e ( c ) ) } p ( e ( j ) ) ) + ( \ sum _ { j : j \ in t ( c ) \ land p ( e ( j ) ) \ leq p ( e ( c ) ) } p ( j ) ) } } } the probability of the theories without predictive power on c is the same as the probability of c. so, p ( e ( c ) ) = j : j ∈ t ( c ) ∧ p ( e ( j ) ) ≤ p ( e ( c ) ) p ( j ) { \ displaystyle p ( e ( c ) ) = \ sum _ { j : j \ in t ( c ) \ land p ( e ( j ) ) \ leq p ( e ( c ) ) } p ( j ) } so the probability t ∈ t ( c ), p ( t | c ) = p ( e ( t ) ) p ( e ( c ) ) + j : j ∈ t ( c ) ∧ p ( e ( j ) ) > p ( e ( c ) ) p ( e ( j ) ) { \ displaystyle \ forall t \ in t ( c ), p ( t | c ) = { \ frac { p ( e ( t ) ) } { p ( e ( c ) ) + \ sum _ { j : j \ in t ( c ) \ land p ( e ( j ) ) > p ( e ( c ) ) } p ( e ( j ) ) } } } and the probability of no prediction for c, written as random ( c ) { \ displaystyle \ operatorname { random } ( c ) }, p ( random ( c ) | c ) = p ( e ( c ) ) p ( e ( c ) ) + j : j ∈ t ( c ) ∧ p ( e ( j ) ) > p ( e ( c ) ) p ( e ( j ) ) { \ displaystyle p ( { \ text { random } } ( c ) | c ) = { \ frac { p ( e ( c ) ) }
|
Inductive probability
|
wikipedia
|
) ) > p ( e ( c ) ) p ( e ( j ) ) { \ displaystyle p ( { \ text { random } } ( c ) | c ) = { \ frac { p ( e ( c ) ) } { p ( e ( c ) ) + \ sum _ { j : j \ in t ( c ) \ land p ( e ( j ) ) > p ( e ( c ) ) } p ( e ( j ) ) } } } the probability of a condition was given as, t, p ( e ( t ) ) = n : r ( n ) ≡ t 2 − l ( n ) { \ displaystyle \ forall t, p ( e ( t ) ) = \ sum _ { n : r ( n ) \ equiv t } 2 ^ { - l ( n ) } } bit strings for theories that are more complex than the bit string given to the agent as input have no predictive power. there probabilities are better included in the random case. to implement this a new definition is given as f in, t, p ( f ( t, c ) ) = n : r ( n ) ≡ t ∧ l ( n ) < l ( c ) 2 − l ( n ) { \ displaystyle \ forall t, p ( f ( t, c ) ) = \ sum _ { n : r ( n ) \ equiv t \ land l ( n ) < l ( c ) } 2 ^ { - l ( n ) } } using f, an improved version of the abductive probabilities is, t ∈ t ( c ), p ( t | c ) = p ( f ( t, c ) ) p ( f ( c, c ) ) + j : j ∈ t ( c ) ∧ p ( f ( j, c ) ) > p ( f ( c, c ) ) p ( e ( j, c ) ) { \ displaystyle \ forall t \ in t ( c ), p ( t | c ) = { \ frac { p ( f ( t, c ) ) } { p ( f ( c, c ) ) + \ sum _ { j : j \ in t ( c ) \ land p ( f ( j, c ) ) > p ( f ( c, c ) ) } p ( e ( j, c ) ) } } } p ( random ( c ) | c )
|
Inductive probability
|
wikipedia
|
j \ in t ( c ) \ land p ( f ( j, c ) ) > p ( f ( c, c ) ) } p ( e ( j, c ) ) } } } p ( random ( c ) | c ) = p ( f ( c, c ) ) p ( f ( c, c ) ) + j : j ∈ t ( c ) ∧ p ( f ( j, c ) ) > p ( f ( c, c ) ) p ( f ( j, c ) ) { \ displaystyle p ( \ operatorname { random } ( c ) | c ) = { \ frac { p ( f ( c, c ) ) } { p ( f ( c, c ) ) + \ sum _ { j : j \ in t ( c ) \ land p ( f ( j, c ) ) > p ( f ( c, c ) ) } p ( f ( j, c ) ) } } } = = key people = = william of ockham thomas bayes ray solomonoff andrey kolmogorov chris wallace d. m. boulton jorma rissanen marcus hutter = = see also = = abductive reasoning algorithmic probability algorithmic information theory bayesian inference information theory inductive inference inductive logic programming inductive reasoning learning minimum message length minimum description length occam's razor solomonoff's theory of inductive inference universal artificial intelligence = = references = = = = external links = = rathmanner, s and hutter, m., " a philosophical treatise of universal induction " in entropy 2011, 13, 1076 – 1136 : a very clear philosophical and mathematical analysis of solomonoff's theory of inductive inference. c. s. wallace, statistical and inductive inference by minimum message length, springer - verlag ( information science and statistics ), isbn 0 - 387 - 23795 - x, may 2005 – chapter headings, table of contents and sample pages.
|
Inductive probability
|
wikipedia
|
direct market access ( dma ) in financial markets is the electronic trading infrastructure that gives investors wishing to trade in financial instruments a way to interact with the order book of an exchange. normally, trading on the order book is restricted to broker - dealers and market making firms that are members of the exchange. using dma, investment companies ( also known as buy side firms ) and other private traders use the information technology infrastructure of sell side firms such as investment banks and the market access that those firms possess, but control the way a trading transaction is managed themselves rather than passing the order over to the broker's own in - house traders for execution. today, dma is often combined with algorithmic trading giving access to many different trading strategies. certain forms of dma, most notably " sponsored access ", have raised substantial regulatory concerns because of the possibility of a malfunction by an investor to cause widespread market disruption. = = history = = as financial markets moved on from traditional open outcry trading on exchange trading floors towards decentralized electronic, screen - based trading and information technology improved, the opportunity for investors and other buy side traders to trade for themselves rather than handing orders over to brokers for execution began to emerge. the implementation of the fix protocol gave market participants the ability to route orders electronically to execution desks. advances in the technology enabled more detailed instructions to be submitted electronically with the underlying order. the logical conclusion to this, enabling investors to work their own orders directly on the order book without recourse to market makers, was first facilitated by electronic communication networks such as instinet. recognising the threat to their own businesses, investment banks began acquiring these companies ( e. g. the purchase of instinet in 2007 by nomura holdings ) and developing their own dma technologies. most major sell - side brokers now provide dma services to their clients alongside their traditional'worked'orders and algorithmic trading solutions giving access to many different trading strategies. = = benefits = = there are several motivations for why a trader may choose to use dma rather than alternative forms of order placement : dma usually offers lower transaction costs because only the technology is being paid for and not the usual order management and oversight responsibilities that come with an order passed to a broker for execution. orders are handled directly by the originator giving them more control over the final execution and the ability to exploit liquidity and price opportunities more quickly. information leakage is minimised because the trading is done anonymously using the d
|
Direct market access
|
wikipedia
|
broker for execution. orders are handled directly by the originator giving them more control over the final execution and the ability to exploit liquidity and price opportunities more quickly. information leakage is minimised because the trading is done anonymously using the dma provider's identity as a cover. dma systems are also generally shielded from other trading desks within the provider's organisation by a chinese wall. direct market access allows a user to'trade the spread'of a stock. this is facilitated by the permission of entering your order onto the'level 2'order book, effectively negating the need to pass through a broker or dealer. = = ultra - low latency direct market access ( ulldma ) = = advanced trading platforms and market gateways are essential to the practice of high - frequency trading. order flow can be routed directly to the line handler where it undergoes a strict set of risk filters before hitting the execution venue ( s ). typically, ulldma systems built specifically for hft can currently handle high amounts of volume and incur no delay greater than 500 microseconds. one area in which low - latency systems can contribute to best execution is with functionality such as direct strategy access ( dsa ) and smart order router. = = sponsored access = = following the flash crash, it has become difficult for a trading participant to get a true form of direct market access in a sponsored access arrangement with a broker. this owes to changes to the net capital rule, rule 15c3 - 1, that the us securities and exchange commission adopted in july 2013, which amended the regulatory capital requirements for us - regulated broker - dealers and required sponsored access trades to go through the sponsoring broker's pre - trade risk layer. = = foreign exchange direct market access = = foreign exchange direct market access ( fx dma ) refers to electronic facilities that match foreign exchange orders from individual investors, buy - side or sell - side firms with each other. fx dma infrastructures, provided by independent fx agency desks or exchanges, consist of a front - end, api or fix trading interfaces that disseminate order and available quantity data from all participants and enables buy - side traders, both institutions in the interbank market and individuals trading retail forex in a low latency environment. other defining criteria of fx dma : trades are matched solely on a price / time protocol. there are no re - quotes. platforms display the full range ( 0 - 9 ) of one - tenth pip or percentage in point consistent with
|
Direct market access
|
wikipedia
|
latency environment. other defining criteria of fx dma : trades are matched solely on a price / time protocol. there are no re - quotes. platforms display the full range ( 0 - 9 ) of one - tenth pip or percentage in point pricing, consistent with professional fx market quotation protocols, not half - pip pricing ( 0 or 5 ). anonymous platforms ensure neutral prices reflecting global fx market conditions, not a dealer's knowledge or familiarity with a client's trading methods, strategies, tactics or current position ( s ). enhanced control of trade execution by providing live, executable price and quantity data enables a trader to see exactly at what price they can trade for the full amount of a transaction. orders are facilitated by agency brokers. the broker is not a market maker or liquidity destination on the dma platform it provides to clients. fx dma market structures show variable spreads related to interbank market conditions, including volatility, pending or recently released news, as well as market maker trading flows. by definition, fx dma market structures cannot show fixed spreads, which are indicative of dealer platforms. fees are either a fixed markup into the client's dealing price, a commission, or both. = = see also = = market maker direct access trading straight - through processing electronic communication network = = references = =
|
Direct market access
|
wikipedia
|
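as a toy illustration of the price / time matching protocol described above ( not any real venue's matching engine ), resting orders can be kept in a heap keyed by price and then by arrival time ; the prices and sizes below are invented.

import heapq

# resting offers as (price, arrival_sequence, quantity); a min-heap gives price/time priority.
offers = [(1.1052, 1, 500_000), (1.1051, 2, 300_000), (1.1052, 0, 200_000)]
heapq.heapify(offers)

def match_buy(quantity):
    """Fill a buy order against the best-priced, earliest resting offers."""
    fills = []
    while quantity > 0 and offers:
        price, seq, avail = heapq.heappop(offers)
        take = min(quantity, avail)
        fills.append((price, take))
        quantity -= take
        if avail > take:                          # put back the unfilled remainder
            heapq.heappush(offers, (price, seq, avail - take))
    return fills

print(match_buy(400_000))  # [(1.1051, 300000), (1.1052, 100000)] - best price first, then earliest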
in computer science, bootstrapping is the technique for producing a self - compiling compiler – that is, a compiler ( or assembler ) written in the source programming language that it intends to compile. an initial core version of the compiler ( the bootstrap compiler ) is generated in a different language ( which could be assembly language ) ; successive expanded versions of the compiler are developed using this minimal subset of the language. the problem of compiling a self - compiling compiler has been called the chicken - or - egg problem in compiler design, and bootstrapping is a solution to this problem. bootstrapping is a fairly common practice when creating a programming language. many compilers for many programming languages are bootstrapped, including compilers for algol, basic, c, common lisp, d, eiffel, elixir, go, haskell, java, modula - 2, nim, oberon, ocaml, pascal, pl / i, python, rust, scala, scheme, typescript, vala, zig and more. = = process = = a typical bootstrap process works in three or four stages : stage 0 : preparing an environment for the bootstrap compiler to work with. this is where the source language and output language of the bootstrap compiler are chosen. in the case of a " bare machine " ( one which has no compiler for any language ) the source and output are written as binary machine code, or may be created by cross compiling on some other machine than the target. otherwise, the bootstrap compiler is to be written in one of the programming languages which does exist on the target machine, and that compiler will generate something which can execute on the target, including a high - level programming language, an assembly language, an object file, or even machine code. stage 1 : the bootstrap compiler is produced. this compiler is enough to translate its own source into a program which can be executed on the target machine. at this point, all further development is done using the language defined by the bootstrap compiler, and stage 2 begins. stage 2 : a full compiler is produced by the bootstrap compiler. this is typically done in stages as needed, e. g. compiler for version x of the language will be able to compile features from version x + 1, but that compiler does not actually use those features. once this compiler has been tested and can compile itself, now version x + 1 features may be used by subsequent
|
Bootstrapping (compilers)
|
wikipedia
|
x of the language will be able to compile features from version x + 1, but that compiler does not actually use those features. once this compiler has been tested and can compile itself, now version x + 1 features may be used by subsequent releases of the compiler. stage 3 : a full compiler is produced by the stage 2 full compiler. if more features are to be added, work resumes at stage 2, with the current stage 3 full compiler replacing the bootstrap compiler. the full compiler is built twice in order to compare the outputs of the two stages. if they are different, either the bootstrap or the full compiler contains a bug. = = advantages = = bootstrapping a compiler has the following advantages : it is a non - trivial test of the language being compiled, and as such is a form of dogfooding. compiler developers and bug reporters only need to know the language being compiled. compiler development can be performed in the higher - level language being compiled. improvements to the compiler's back - end improve not only general - purpose programs but also the compiler itself. it is a comprehensive consistency check as it should be able to reproduce its own object code. note that some of these points assume that the language runtime is also written in the same language. = = methods = = if one needs to compile a compiler for language x written in language x, there is the issue of how the first compiler can be compiled. the different methods that are used in practice include : implementing an interpreter or compiler for language x in language y. niklaus wirth reported that he wrote the first pascal compiler in fortran. another interpreter or compiler for x has already been written in another language y ; this is how scheme is often bootstrapped. earlier versions of the compiler were written in a subset of x for which there existed some other compiler ; this is how some supersets of java, haskell, and the initial free pascal compiler are bootstrapped. a compiler supporting non - standard language extensions or optional language features can be written without using those extensions and features, to enable it being compiled with another compiler supporting the same base language but a different set of extensions and features. the main parts of the c + + compiler clang were written in a subset of c + + that can be compiled by both g + + and microsoft visual c + +. advanced features are written with some gcc extensions. the compiler for x is cross compiled from another architecture where there exists a compiler for x ; this
|
Bootstrapping (compilers)
|
wikipedia
|
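the stage - by - stage rebuild and comparison described above can be scripted. the sketch below is a generic driver for this kind of check, not any particular compiler's build system ; the command and file names ( bootstrap - cc, cc. src, cc - a and so on ) are placeholders.

import filecmp
import subprocess

# placeholder names: "bootstrap-cc" is the already-working bootstrap compiler and
# "cc.src" is the full compiler's own source code.
subprocess.run(["./bootstrap-cc", "cc.src", "-o", "cc-a"], check=True)  # full compiler, built by the bootstrap compiler
subprocess.run(["./cc-a", "cc.src", "-o", "cc-b"], check=True)          # full compiler, built by itself
subprocess.run(["./cc-b", "cc.src", "-o", "cc-c"], check=True)          # and rebuilt by itself once more

# cc-b and cc-c were both produced by a compiler compiled from the same source,
# so they should normally be identical; a difference points to a bug somewhere
# in the chain (or to non-determinism in the build).
if filecmp.cmp("cc-b", "cc-c", shallow=False):
    print("bootstrap reached a fixed point: the compiler reproduces itself")
else:
    raise SystemExit("rebuild outputs differ - investigate the bootstrap or full compiler")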
subset of c + + that can be compiled by both g + + and microsoft visual c + +. advanced features are written with some gcc extensions. the compiler for x is cross compiled from another architecture where there exists a compiler for x ; this is how compilers for c are usually ported to other platforms. also this is the method used for free pascal after the initial bootstrap. writing the compiler in x ; then hand - compiling it from source ( most likely in a non - optimized way ) and running that on the code to get an optimized compiler. donald knuth used this for his web literate programming system. methods for distributing compilers in source code include providing a portable bytecode version of the compiler, so as to bootstrap the process of compiling the compiler with itself. the t - diagram is a notation used to explain these compiler bootstrap techniques. in some cases, the most convenient way to get a complicated compiler running on a system that has little or no software on it involves a series of ever more sophisticated assemblers and compilers. = = history = = assemblers were the first language tools to bootstrap themselves. the first high - level language to provide such a bootstrap was neliac in 1958. the first widely used languages to do so were burroughs b5000 algol in 1961 and lisp in 1962. hart and levin wrote a lisp compiler in lisp at mit in 1962, testing it inside an existing lisp interpreter. once they had improved the compiler to the point where it could compile its own source code, it was self - hosting. the compiler as it exists on the standard compiler tape is a machine language program that was obtained by having the s - expression definition of the compiler work on itself through the interpreter. this technique is only possible when an interpreter already exists for the very same language that is to be compiled. it borrows directly from the notion of running a program on itself as input, which is also used in various proofs in theoretical computer science, such as the variation of the proof that the halting problem is undecidable that uses rice's theorem. = = current efforts = = due to security concerns regarding the trusting trust attack ( which involves a compiler being maliciously modified to introduce covert backdoors in programs it compiles or even further replicate the malicious modification in future versions of the compiler itself, creating a perpetual cycle of distrust ) and various attacks against binary trustworthiness, multiple projects are working to
|
Bootstrapping (compilers)
|
wikipedia
|
##ly modified to introduce covert backdoors in programs it compiles or even further replicate the malicious modification in future versions of the compiler itself, creating a perpetual cycle of distrust ) and various attacks against binary trustworthiness, multiple projects are working to reduce the effort for not only bootstrapping from source but also allowing everyone to verify that source and executable correspond. these include the bootstrappable builds project and the reproducible builds project. = = see also = = self - hosting self - interpreter indirect self - modification tombstone diagram metacompiler falsework = = references = =
|
Bootstrapping (compilers)
|
wikipedia
|
the billionaire space race is the rivalry among entrepreneurs who have entered the space industry from other industries – particularly computing. this private spaceflight race involves sending privately developed rockets and vehicles to various destinations in space, often in response to government programs or to develop the space tourism sector. since 2018, the billionaire space race has primarily been between three billionaires and their respective firms : jeff bezos's blue origin, which is seeking to establish an industrial base in space, and his kuiper systems subsidiary of amazon seeking to provide satellite - based internet richard branson's virgin group ( through virgin galactic and the now cancelled virgin orbit ), which seeks to dominate space tourism, low - cost small orbital launch vehicles, and intercontinental sub - orbital spaceflight. elon musk's spacex, which seeks to colonize mars as well as provide satellite - based internet through its starlink project. prior to his death in 2018, paul allen was also a major player in the billionaire space race through the aerospace division of his firm vulcan and his financing of programs such as scaled composites tier one. allen sought to reduce the cost of launching payloads into orbit. = = background = = the groundwork for the billionaire space race and private spaceflight was arguably laid by peter diamandis, an american entrepreneur. in the 1980s, he founded an american national student space society, the students for the exploration and development of space ( seds ). later, jeff bezos became a chapter president of seds. in the 1990s, diamandis, disappointed with the state of space development, spurred it on and sparked the suborbital space tourism market, by initiating a prize, the x prize. this led to paul allen becoming involved in the competition, creating the scaled composites tier one platform of spaceshipone and white knight one which won the ansari x - prize in 2004. the technology of the winning entrant was then licensed by richard branson's virgin group as a basis to found virgin galactic. the base techniques of tier one also form the basis for stratolaunch systems ( formerly of vulcan aerospace ). elon musk's spacex was established in 2002, last among the three main rivals. speaking at cape canaveral space force station and without reference to private spaceflight, elon musk expressed excitement for a new space race in 2018. government programs have also fueled the billionaire space race. nasa programs such as the commercial crew program ( created in 2010, with grants mostly won by spacex and
|
Billionaire space race
|
wikipedia
|
to private spaceflight, elon musk expressed excitement for a new space race in 2018. government programs have also fueled the billionaire space race. nasa programs such as the commercial crew program ( created in 2010, with grants mostly won by spacex and partially by blue origin ) and the artemis hls program ( awarded to spacex in 2021 and also to blue origin in 2023 ) have pushed the billionaires to compete against each other to be selected for those multi - billion dollar procurement programs. the competition has also resulted in court battles such as blue origin v. united states & spacex. those government programs have provided critical funding for the new private space industry and its development. = = major milestones = = 21 june 2004 – scaled composites tier one, funded by paul allen, achieves the first entirely privately funded crewed flight to space ( suborbital, crossing the 100 km karman line ) with the spaceshipone flight 15p. the program won the ansari x prize later that year. 30 may 2020 – spacex successfully launches a falcon 9 rocket carrying the crew dragon space capsule during the demo - 2 mission, marking the first privately developed crewed mission to orbit and to visit the international space station ( iss ). 11 july 2021 – richard branson made a successful sub - orbital spaceflight as a member of virgin galactic unity 22. 20 july 2021 – jeff bezos made a successful sub - orbital spaceflight aboard blue origin's ns - 16, becoming the first billionaire space company founder to cross the karman line. 16 september 2021 – spacex operates the inspiration4 mission, the first orbital spaceflight with only private citizens aboard. april 2023 – on its inaugural flight, starship becomes the most powerful launch vehicle ever flown. may 2024 – boosters ( 1st stage ) of the falcon 9 family of rockets have been reused over 300 times. certification is in progress to be able to reuse a single booster 40 times. september 2024 – spacex operates the polaris dawn mission, which performs the first private spacewalk and becomes the furthest crewed mission from earth since apollo 17. january 2025 – first launch of new glenn, blue origin's heavy lift reusable launcher contracted for the u. s. nasa artemis program, nssl, and other planetary missions. april 2025 at 01 : 46 ( utc ), fram2 launched aboard a spacex falcon 9 rocket, becoming the first crewed spaceflight to enter a polar retrograde orbit, i. e., to
|
Billionaire space race
|
wikipedia
|
, and other planetary missions. april 2025 at 01 : 46 ( utc ), fram2 launched aboard a spacex falcon 9 rocket, becoming the first crewed spaceflight to enter a polar retrograde orbit, i. e., to fly over earth's poles. = = rivalries = = = = = spacex vs. blue origin = = = spacex and blue origin have had a history of conflict. blue origin and spacex have had dueling press releases that compete with each other's announcements and events. spacex and blue origin battled for the right to lease kennedy space center lc - 39a, the rocket launch platform that was used to launch the apollo moon missions. spacex won the lease in 2013, but blue origin filed suit in court against that. it is currently in the hands of spacex, while blue origin rented slc - 36 instead. spacex filed suit against blue origin to invalidate their patent on landing rockets aboard ships at sea. they won their court fight in 2014. spacex had been attempting to land rockets at sea since 2014, finally succeeding in 2016, before blue origin first built a sea - going platform for landing rockets. spacex and blue origin got into a twitter battle about the meaning of a used rocket, landed rocket, and spacerocket, at the end of 2015, when new shepard successfully landed, after a suborbital jaunt into space. spacex had previously launched and landed its grasshopper rocket multiple times without reaching space. then spacex landed a falcon 9 first stage, which had been used to launch a satellite into orbit, prompting more twitter battles at the start of 2016, such as bezos tweeting " welcome to the club ". in late 2016, blue origin announced the new glenn, directly competing against spacex's falcon heavy, with a larger rocket but lower payload. at the 2016 international astronautical congress in guadalajara, mexico, blue origin president rob meyerson elaborated on the bezos vision previously outlined in the new glenn announcement. the blue origin new armstrong would be similar in function to the spacex interplanetary transport system that elon musk unveiled at the same conference. in april 2021, spacex beat blue origin to a $ 2. 9 billion contract to build the lunar lander for nasa's artemis program. in august 2021, blue origin subsequently began a legal case against nasa and spacex in the court of federal claims, which was dismissed in november of the same year. about two years later in may 202
|
Billionaire space race
|
wikipedia
|
lunar lander for nasa's artemis program. in august 2021, blue origin subsequently began a legal case against nasa and spacex in the court of federal claims, which was dismissed in november of the same year. about two years later in may 2023, nasa awarded blue origin a $ 3. 4 billion contract to develop a competing moon lander, noting that " adding another human landing system partner to nasa's artemis program will increase competition, reduce costs to taxpayers, support a regular cadence of lunar landings, further invest in the lunar economy, and help nasa achieve its goals on and around the moon in preparation for future astronaut missions to mars. " = = = blue origin vs. virgin galactic = = = blue origin and virgin galactic are in the same market, suborbital space tourism, with the space capsule new shepard and the spaceplane spaceshiptwo, respectively. the two systems made their first flights with multiple passengers within 10 days of each other : spaceshiptwo flew on july 11, 2021, and new shepard followed on july 20, both carrying their billionaire founders and a few other passengers. as of july 2023, spaceshiptwo has made three tourism flights with two pilots and four passengers each, while new shepard has made six flights with six passengers each. in may 2023, richard branson ended virgin orbit in bankruptcy, and then in december 2023, he announced that he would not invest more money in virgin galactic. having already put one billion dollars into the project, he said that they should have enough money to continue without more from him. = = former rivalries = = = = = stratolaunch vs. virgin orbit = = = the stratolaunch rivalries are no longer part of the billionaire space race, after 2019, having been suspended at the time of paul allen's death. the stratolaunch company has since continued operations under new ownership, but does not focus on orbital space launches anymore. vulcan aerospace subsidiary stratolaunch systems planned to air - launch satellite launcher rockets, the same profile as planned by virgin orbit for its launcherone operations. while launcherone was developed and launch aircraft procured ( once white knight two, now 747 cosmic girl ), the scaled composites " roc " model 351 is still being developed ( as of 2022 ) and the rocket to mate to it ( the company has refocused away from orbital spaceflight ) has yet to be selected. after the death of paul allen in 2018, stratolaunch was sold
|
Billionaire space race
|
wikipedia
|
being developed ( as of 2022 ) and the rocket to mate to it ( the company has refocused away from orbital spaceflight ) has yet to be selected. after the death of paul allen in 2018, stratolaunch was sold, and is no longer a billionaire insurgent venture. = = criticism = = the critical response to space tourism has lambasted billionaire founders ( e. g., richard branson and jeff bezos ) and questioned their environmental, financial, and social / ethical practices. = = see also = = the space barons, 2018 book by christian davenport cold war space race ; between the us and ussr ; leading to the race to the moon space launch market competition commercialization of space mars race list of billionaire spacetravellers = = references = = = = further reading = = bernstein, joshua d. ( may 2024 ). " the billionaire space race : internet memes and the netizen response to space tourism ". annals of tourism research empirical insights. 5 ( 1 ) : 100122. doi : 10. 1016 / j. annale. 2024. 100122. davenport, christian ( 2018 ). the space barons : elon musk, jeff bezos, and the quest to colonize the cosmos. publicaffairs. isbn 978 - 1610398299. fernholz, tim ( 2018 ). rocket billionaires : elon musk, jeff bezos, and the new space race. houghton mifflin harcourt. isbn 978 - 1328662231. guthrie, julian ( 2016 ). how to make a spaceship : a band of renegades, an epic race, and the birth of private spaceflight. penguin books. isbn 978 - 1594206726. vance, ashlee ( 2015 ). elon musk : tesla, spacex, and the quest for a fantastic future. ecco. isbn 978 - 0062301239.
|
Billionaire space race
|
wikipedia
|
the international psychopharmacology algorithm project ( ipap ) is a non - profit corporation whose purpose is to " enable, enhance, and propagate " the use of algorithms for the treatment of some axis i psychiatric disorders. kenneth o jobson founded the project. the dean foundation provides funding. ipap has organized and supported several international conferences on psychopharmacology algorithms. it has also supported the creation of several algorithms based on expert opinion. it is now in the process of creating " evidence - based algorithms ", that is, algorithms created by experts and annotated with the evidence that leads to them. a schizophrenia algorithm has been created, and one on post - traumatic stress disorder ( ptsd ) was released in july 2005. a generalized anxiety disorder ( gad ) algorithm was released in 2006. periodic updates of the algorithms are released as the evidence base changes. in addition, the algorithms are being translated into various non - english languages ( chinese, japanese, spanish, and thai ) as the availability of translators permits. = = references = = = = external links = = official website
|
International Psychopharmacology Algorithm Project
|
wikipedia
|
the premotor theory of attention is a theory in cognitive neuroscience proposing that when attention is shifted, the brain engages a motor plan to move to engage with that focus. one line of evidence for this theory comes from neurophysiological recordings in the frontal eye fields and superior colliculus. neurons in these areas are typically activated during eye movements, and electrical stimulation of these regions can generate eye movements. another line of evidence comes from behavioural findings, showing that upcoming eye movements facilitate perception. = = references = =
|
Premotor theory of attention
|
wikipedia
|
instrumentation is a collective term for measuring instruments, used for indicating, measuring, and recording physical quantities. it is also a field of study concerning the art and science of making measurement instruments, involving the related areas of metrology, automation, and control theory. the term has its origins in the art and science of scientific instrument - making. instrumentation can refer to devices as simple as direct - reading thermometers, or as complex as multi - sensor components of industrial control systems. instruments can be found in laboratories, refineries, factories and vehicles, as well as in everyday household use ( e. g., smoke detectors and thermostats ). = = measurement parameters = = instrumentation is used to measure many parameters ( physical values ), including : pressure, either differential or static ; flow ; temperature ; levels of liquids, etc. ; moisture or humidity ; density ; viscosity ; ionising radiation ; frequency ; current ; voltage ; inductance ; capacitance ; resistivity ; chemical composition ; chemical properties ; toxic gases ; position ; vibration ; weight. = = history = = the history of instrumentation can be divided into several phases. = = = pre - industrial = = = elements of industrial instrumentation have long histories. scales for comparing weights and simple pointers to indicate position are ancient technologies. some of the earliest measurements were of time. one of the oldest water clocks was found in the tomb of the ancient egyptian pharaoh amenhotep i, buried around 1500 bce. improvements were incorporated in the clocks. by 270 bce water clocks had the rudiments of an automatic control device. in 1663 christopher wren presented the royal society with a design for a " weather clock ". a drawing shows meteorological sensors moving pens over paper driven by clockwork. such devices did not become standard in meteorology for two centuries. the concept has remained virtually unchanged, as evidenced by pneumatic chart recorders, where a pressurized bellows displaces a pen. integrating sensors, displays, recorders, and controls was uncommon until the industrial revolution, limited by both need and practicality. = = = early industrial = = = early systems used direct process connections to local control panels for control and indication, which from the early 1930s saw the introduction of pneumatic transmitters and automatic 3 - term ( pid ) controllers. the ranges of pneumatic transmitters were defined by the need to control valves and actuators in the field. typically, a signal ranged from 3 to 15 psi ( 20 to 100 kpa or 0. 2 to 1. 0 kg / cm2 ) as a standard, with 6
|
Instrumentation
|
wikipedia
|
the need to control valves and actuators in the field. typically, a signal ranged from 3 to 15 psi ( 20 to 100 kpa or 0. 2 to 1. 0 kg / cm2 ) as a standard, with 6 to 30 psi occasionally being used for larger valves. transistor electronics enabled wiring to replace pipes, initially with a range of 20 to 100 ma at up to 90 v for loop powered devices, reducing to 4 to 20 ma at 12 to 24 v in more modern systems. a transmitter is a device that produces an output signal, often in the form of a 4 – 20 ma electrical current signal, although many other options using voltage, frequency, pressure, or ethernet are possible. the transistor was commercialized by the mid - 1950s. instruments attached to a control system provided signals used to operate solenoids, valves, regulators, circuit breakers, relays and other devices. such devices could control a desired output variable, and provide either remote monitoring or automated control capabilities. each instrument company introduced its own standard instrumentation signal, causing confusion until the 4 – 20 ma range was adopted as the standard electronic instrument signal for transmitters and valves. this signal was eventually standardized as ansi / isa s50, " compatibility of analog signals for electronic industrial process instruments ", in the 1970s. the transformation of instrumentation from mechanical pneumatic transmitters, controllers, and valves to electronic instruments reduced maintenance costs, as electronic instruments were more dependable than mechanical instruments. this also increased efficiency and production due to their increased accuracy. pneumatics nevertheless enjoyed some advantages, being favored in corrosive and explosive atmospheres. = = = automatic process control = = = in the early years of process control, process indicators and control elements such as valves were monitored by an operator who walked around the unit adjusting the valves to obtain the desired temperatures, pressures, and flows. as technology evolved, pneumatic controllers were invented and mounted in the field to monitor the process and control the valves. this reduced the amount of time process operators needed to monitor the process. in later years, the controllers were moved to a central room : signals were sent into the control room to monitor the process, and output signals were sent to the final control element, such as a valve, to adjust the process as needed. these controllers and indicators were mounted on a wall called a control board. the operators stood in front of this board, walking back and forth monitoring the process indicators. this again reduced the number of process operators needed and the time they spent walking around the units
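as a rough illustration of how a live - zero current - loop reading is converted into an engineering value, a minimal python sketch follows ; the 0 – 10 bar transmitter range is a hypothetical choice for illustration, not taken from the text :

```python
def current_to_value(i_ma, lo_ma=4.0, hi_ma=20.0, span_lo=0.0, span_hi=10.0):
    """Convert a live-zero 4-20 mA loop current into an engineering value.

    lo_ma/hi_ma are the loop limits; span_lo/span_hi describe the calibrated
    range of a hypothetical transmitter (here 0-10 bar, chosen for illustration).
    """
    if i_ma < lo_ma - 0.5:
        # a healthy live-zero loop never drops this far below 4 mA
        raise ValueError("loop current below live zero: broken wire or failed transmitter?")
    fraction = (i_ma - lo_ma) / (hi_ma - lo_ma)   # 0.0 at 4 mA, 1.0 at 20 mA
    return span_lo + fraction * (span_hi - span_lo)

# example: 12 mA is mid-scale, i.e. 5.0 bar on a 0-10 bar transmitter
print(current_to_value(12.0))
```

the offset at 4 ma ( the " live zero " ) is what makes a dead transmitter or broken wire distinguishable from a genuine zero reading.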
|
Instrumentation
|
wikipedia
|
. these controllers and indicators were mounted on a wall called a control board. the operators stood in front of this board, walking back and forth monitoring the process indicators. this again reduced the number of process operators needed and the time they spent walking around the units. the most common pneumatic signal level used during these years was 3 – 15 psig. = = = large integrated computer - based systems = = = process control of large industrial plants has evolved through many stages. initially, control would be from panels local to the process plant. however, this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. the next logical development was the transmission of all plant measurements to a permanently staffed central control room. effectively this was the centralization of all the localized panels, with the advantages of lower manning levels and an easy overview of the process. often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to the plant. however, whilst providing a central control focus, this arrangement was inflexible, as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process. with the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer - based algorithms, hosted on a network of input / output racks with their own control processors. these could be distributed around the plant, and communicate with the graphic display in the control room or rooms. the distributed control concept was born. the introduction of dcss and scada allowed easy interconnection and re - configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. it enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to the plant to reduce cabling runs, and provided high level overviews of plant status and production levels. = = application = = in some cases, the sensor is a very minor element of the mechanism. digital cameras and wristwatches might technically meet the loose definition of instrumentation because they record and / or display sensed information. under most circumstances neither would be called instrumentation, but when used to measure the elapsed time of a race and to document the winner at the finish line, both would be called instrumentation. = = = household = = = a very simple example of an instrumentation system is a mechanical thermostat, used
|
Instrumentation
|
wikipedia
|
to measure the elapsed time of a race and to document the winner at the finish line, both would be called instrumentation. = = = household = = = a very simple example of an instrumentation system is a mechanical thermostat, used to control a household furnace and thus to control room temperature. a typical unit senses temperature with a bi - metallic strip. it displays temperature by a needle on the free end of the strip. it activates the furnace by a mercury switch. as the switch is rotated by the strip, the mercury makes physical ( and thus electrical ) contact between electrodes. another example of an instrumentation system is a home security system. such a system consists of sensors ( motion detection, switches to detect door openings ), simple algorithms to detect intrusion, local control ( arm / disarm ) and remote monitoring of the system so that the police can be summoned. communication is an inherent part of the design. kitchen appliances use sensors for control. a refrigerator maintains a constant temperature by actuating the cooling system when the temperature becomes too high. an automatic ice machine makes ice until a limit switch is thrown. pop - up bread toasters allow the time to be set. non - electronic gas ovens will regulate the temperature with a thermostat controlling the flow of gas to the gas burner. these may feature a sensor bulb sited within the main chamber of the oven. in addition, there may be a safety cut - off flame supervision device : after ignition, the burner's control knob must be held for a short time in order for a sensor to become hot, and permit the flow of gas to the burner. if the safety sensor becomes cold, this may indicate the flame on the burner has become extinguished, and to prevent a continuous leak of gas the flow is stopped. electric ovens use a temperature sensor and will turn on heating elements when the temperature is too low. more advanced ovens will actuate fans in response to temperature sensors, to distribute heat or to cool. a common toilet refills the water tank until a float closes the valve. the float is acting as a water level sensor. = = = automotive = = = modern automobiles have complex instrumentation. in addition to displays of engine rotational speed and vehicle linear speed, there are also displays of battery voltage and current, fluid levels, fluid temperatures, distance traveled, and feedback of various controls ( turn signals, parking brake, headlights, transmission position ). cautions may be displayed for special problems ( fuel low, check engine,
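the on / off behaviour of such a thermostat can be sketched as a simple hysteresis ( bang - bang ) controller ; the setpoint and deadband below are illustrative numbers, not taken from any particular device :

```python
def thermostat_step(temp_c, furnace_on, setpoint=20.0, hysteresis=0.5):
    """Return the new furnace state for one control step.

    The deadband (setpoint +/- hysteresis) prevents rapid on/off cycling,
    much as the mechanical lag of the bi-metallic strip and mercury switch
    does in the thermostat described above.  All numbers are illustrative.
    """
    if temp_c < setpoint - hysteresis:
        return True            # too cold: turn the furnace on
    if temp_c > setpoint + hysteresis:
        return False           # warm enough: turn it off
    return furnace_on          # inside the deadband: keep the current state

# example: starting cold, the furnace switches on and stays on through the deadband
state = False
for reading in (18.9, 19.6, 20.4, 20.6):
    state = thermostat_step(reading, state)
    print(reading, state)
```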
|
Instrumentation
|
wikipedia
|
also displays of battery voltage and current, fluid levels, fluid temperatures, distance traveled, and feedback from various controls ( turn signals, parking brake, headlights, transmission position ). cautions may be displayed for special problems ( fuel low, check engine, tire pressure low, door ajar, seat belt unfastened ). problems are recorded so they can be reported to diagnostic equipment. navigation systems can provide voice commands to reach a destination. automotive instrumentation must be cheap and reliable over long periods in harsh environments. there may be independent airbag systems that contain sensors, logic and actuators. anti - skid braking systems use sensors to control the brakes, while cruise control affects throttle position. a wide variety of services can be provided via communication links on the onstar system. autonomous cars ( with exotic instrumentation ) have been demonstrated. = = = aircraft = = = early aircraft had a few sensors. " steam gauges " converted air pressures into needle deflections that could be interpreted as altitude and airspeed. a magnetic compass provided a sense of direction. the displays to the pilot were as critical as the measurements. a modern aircraft has a far more sophisticated suite of sensors and displays, which are embedded into avionics systems. the aircraft may contain inertial navigation systems, global positioning systems, weather radar, autopilots, and aircraft stabilization systems. redundant sensors are used for reliability. a subset of the information may be transferred to a crash recorder to aid mishap investigations. modern pilot displays include computer displays, including head - up displays. air traffic control radar is a distributed instrumentation system. the ground part sends an electromagnetic pulse and receives an echo ( at least ). aircraft carry transponders that transmit codes on reception of the pulse. the system displays an aircraft map location, an identifier and optionally altitude. the map location is based on sensed antenna direction and sensed time delay. the other information is embedded in the transponder transmission. = = = laboratory instrumentation = = = among the possible uses of the term is a collection of laboratory test equipment controlled by a computer through an ieee - 488 bus ( also known as gpib, for general purpose instrument bus, or hpib, for hewlett - packard instrument bus ). laboratory equipment is available to measure many electrical and chemical quantities. such a collection of equipment might be used to automate the testing of drinking water for pollutants. = = instrumentation engineering = = instrumentation engineering is the engineering specialization focused on the principle and operation of measuring instruments that are used in design
|
Instrumentation
|
wikipedia
|
chemical quantities. such a collection of equipment might be used to automate the testing of drinking water for pollutants. = = instrumentation engineering = = instrumentation engineering is the engineering specialization focused on the principle and operation of measuring instruments that are used in the design and configuration of automated systems in areas such as the electrical and pneumatic domains, and on the control of the quantities being measured. instrumentation engineers typically work for industries with automated processes, such as chemical or manufacturing plants, with the goal of improving system productivity, reliability, safety, optimization and stability. to control the parameters in a process or in a particular system, devices such as microprocessors, microcontrollers or plcs are used. instrumentation engineering is loosely defined because the required tasks are very domain dependent. an expert in the biomedical instrumentation of laboratory rats has very different concerns than the expert in rocket instrumentation. common concerns of both are the selection of appropriate sensors based on size, weight, cost, reliability, accuracy, longevity, environmental robustness, and frequency response. some sensors are literally fired in artillery shells. others sense thermonuclear explosions until destroyed. invariably sensor data must be recorded, transmitted or displayed. recording rates and capacities vary enormously. transmission can be trivial or can be clandestine, encrypted and low power in the presence of jamming. displays can be trivially simple or can require consultation with human factors experts. control system design varies from trivial to a separate specialty. instrumentation engineers are responsible for integrating the sensors with the recorders, transmitters, displays or control systems, and for producing the piping and instrumentation diagram for the process. they may design or specify installation, wiring and signal conditioning. they may be responsible for commissioning, calibration, testing and maintenance of the system. in a research environment it is common for subject matter experts to have substantial instrumentation system expertise. an astronomer knows the structure of the universe and a great deal about telescopes – optics, pointing and cameras ( or other sensing elements ). that often includes the hard - won knowledge of the operational procedures that provide the best results. for example, an astronomer is often knowledgeable of techniques to minimize temperature gradients that cause air turbulence within the telescope. instrumentation technologists, technicians and mechanics specialize in troubleshooting, repairing and maintaining instruments and instrumentation systems. = = = typical industrial transmitter signal types = = = pneumatic loop ( 20 – 100 kpa / 3 – 15 psi ) – pneumatic ; current loop
|
Instrumentation
|
wikipedia
|
mechanics specialize in troubleshooting, repairing and maintaining instruments and instrumentation systems. = = = typical industrial transmitter signal types = = = pneumatic loop ( 20 – 100 kpa / 3 – 15 psi ) – pneumatic ; current loop ( 4 – 20 ma ) – electrical ; hart – data signalling, often overlaid on a current loop ; foundation fieldbus – data signalling ; profibus – data signalling. = = impact of modern development = = ralph muller ( 1940 ) stated, " that the history of physical science is largely the history of instruments and their intelligent use is well known. the broad generalizations and theories which have arisen from time to time have stood or fallen on the basis of accurate measurement, and in several instances new instruments have had to be devised for the purpose. there is little evidence to show that the mind of modern man is superior to that of the ancients. his tools are incomparably better. " : 290 davis baird has argued that the major change associated with floris cohen's identification of a " fourth big scientific revolution " after world war ii is the development of scientific instrumentation, not only in chemistry but across the sciences. in chemistry, the introduction of new instrumentation in the 1940s was " nothing less than a scientific and technological revolution " : 28 – 29 in which classical wet - and - dry methods of structural organic chemistry were discarded, and new areas of research opened up. : 38 as early as 1954, w. a. wildhack discussed both the productive and destructive potential inherent in process control. the ability to make precise, verifiable and reproducible measurements of the natural world, at levels that were not previously observable, using scientific instrumentation, has " provided a different texture of the world ". this instrumentation revolution fundamentally changes human abilities to monitor and respond, as is illustrated in the examples of ddt monitoring and the use of uv spectrophotometry and gas chromatography to monitor water pollutants. = = see also = = = = references = = = = external links = =
|
Instrumentation
|
wikipedia
|
eigenmoments is a set of orthogonal, noise robust, invariant to rotation, scaling and translation, and distribution sensitive moments. their application can be found in signal processing and computer vision as descriptors of the signal or image. the descriptors can later be used for classification purposes. it is obtained by performing orthogonalization, via eigen analysis, on geometric moments. = = framework summary = = eigenmoments are computed by performing eigen analysis on the moment space of an image, maximizing the signal - to - noise ratio in the feature space in the form of a rayleigh quotient. this approach has several benefits in image processing applications : the dependency of moments in the moment space on the distribution of the images being transformed ensures decorrelation of the final feature space after eigen analysis on the moment space. the ability of eigenmoments to take into account the distribution of the image makes it more versatile and adaptable for different genres. generated moment kernels are orthogonal, and therefore analysis on the moment space becomes easier. transformation with orthogonal moment kernels into moment space is analogous to projection of the image onto a number of orthogonal axes. noisy components can be removed. this makes eigenmoments robust for classification applications. optimal information compaction can be obtained, and therefore only a few moments are needed to characterize the images. = = problem formulation = = assume that a signal vector $s \in \mathcal{R}^n$ is taken from a certain distribution having correlation $C \in \mathcal{R}^{n \times n}$, i. e. $C = E[ss^T]$, where $E[\cdot]$ denotes expected value. the dimension of the signal space, $n$, is often too large to be useful for practical applications such as pattern classification, so we need to transform the signal space into a space of lower dimensionality. this is performed by a two - step linear transformation : $q = W^T X^T s$, where $q = [q_1, \ldots, q_n]^T \in \mathcal{R}^k$
|
Eigenmoments
|
wikipedia
|
$q = [q_1, \ldots, q_n]^T \in \mathcal{R}^k$ is the transformed signal, $X = [x_1, \ldots, x_n]^T \in \mathcal{R}^{n \times m}$ a fixed transformation matrix which transforms the signal into the moment space, and $W = [w_1, \ldots, w_n]^T \in \mathcal{R}^{m \times k}$ the transformation matrix which we are going to determine by maximizing the snr of the feature space resided by $q$. for the case of geometric moments, $X$ would be the monomials. if $m = k = n$, a full rank transformation would result ; however, usually we have $m \leq n$ and $k \leq m$. this is especially the case when $n$ is of high dimension. finding $W$ that maximizes the snr of the feature space : $SNR_{transform} = \frac{W^T X^T C X W}{W^T X^T N X W}$, where $N$ is the correlation matrix of the noise signal. the problem can thus be formulated as $\{w_1, \ldots, w_k\} = \operatorname{arg\,max}_W \frac{W^T X^T C X W}{W^T X^T N X W}$
|
Eigenmoments
|
wikipedia
|
$\{w_1, \ldots, w_k\} = \operatorname{arg\,max}_W \frac{W^T X^T C X W}{W^T X^T N X W}$, subject to the constraints : $w_i^T X^T N X w_j = \delta_{ij}$, where $\delta_{ij}$ is the kronecker delta. it can be observed that this maximization is a rayleigh quotient : by letting $A = X^T C X$ and $B = X^T N X$, it can be written as $\{w_1, \ldots, w_k\} = \operatorname{arg\,max}_w \frac{w^T A w}{w^T B w}$, subject to $w_i^T B w_j = \delta_{ij}$. = = = rayleigh quotient = = = optimization of the rayleigh quotient has the form : $\max_w R(w) = \max_w \frac{w^T A w}{w^T B w}$, where $A$ and $B$ are both symmetric and $B$ is positive definite and therefore invertible. scaling $w$ does not change the value of the objective function, hence an additional scalar constraint $w^T B w = 1$ can be imposed on $w$ and no solution would
|
Eigenmoments
|
wikipedia
|
not change the value of the objective function, hence an additional scalar constraint $w^T B w = 1$ can be imposed on $w$ and no solution is lost when the objective function is optimized. this constrained optimization problem can be solved using a lagrange multiplier : $\max_w w^T A w$ subject to $w^T B w = 1$, i. e. $\max_w \mathcal{L}(w) = \max_w (w^T A w - \lambda w^T B w)$. equating the first derivative to zero, we have : $A w = \lambda B w$, which is an instance of the generalized eigenvalue problem ( gep ). the gep has the form $A w = \lambda B w$ : for any pair $(w, \lambda)$ that is a solution of this equation, $w$ is called a generalized eigenvector and $\lambda$ is called a generalized eigenvalue. finding $w$ and $\lambda$ that satisfy this equation produces the result which optimizes the rayleigh quotient ; one way of maximizing the rayleigh quotient is therefore to solve the generalized eigen problem. dimension reduction can be performed by simply choosing the first $k$ components $w_i$, $i = 1, \ldots, k$, with the highest values for $R(w)$ out of the $m$ components, and discarding the rest. the interpretation of this transformation is rotating and scaling the moment space, transforming it into a feature space with maximized snr ; therefore, the first $k$ components are the components with the $k$ highest snr
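as a minimal numerical sketch ( not the original authors' implementation ), the gep above can be solved and the strongest components retained, here using scipy's generalized symmetric eigensolver :

```python
import numpy as np
from scipy.linalg import eigh

def snr_maximizing_transform(A, B, k):
    """Solve the generalized eigenvalue problem A w = lambda B w and keep the
    k eigenvectors with the largest generalized eigenvalues, i.e. the highest
    Rayleigh quotient / SNR.  A = X^T C X (signal term), B = X^T N X (noise
    term); B must be symmetric positive definite.
    """
    eigvals, eigvecs = eigh(A, B)          # generalized eigenvalues, ascending
    order = np.argsort(eigvals)[::-1]      # re-sort so the highest SNR comes first
    W = eigvecs[:, order[:k]]              # columns already satisfy w_i^T B w_j = delta_ij
    return W, eigvals[order[:k]]
```

scipy is used here because it returns $B$-orthonormalized eigenvectors directly ; an equivalent route is the explicit whitening construction described in the next subsection.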
|
Eigenmoments
|
wikipedia
|
the interpretation of this transformation is rotating and scaling the moment space, transforming it into a feature space with maximized snr ; therefore, the first $k$ components are the components with the $k$ highest snr values. another way to look at this solution is to use the concept of simultaneous diagonalization instead of the generalized eigen problem. = = = simultaneous diagonalization = = = let $A = X^T C X$ and $B = X^T N X$ as mentioned earlier. we can write $W$ as two separate transformation matrices : $W = W_1 W_2$. $W_1$ can be found by first diagonalizing $B$ : $P^T B P = D_B$, where $D_B$ is a diagonal matrix sorted in increasing order. since $B$ is positive definite, $D_B > 0$. we can discard those eigenvalues that are large and retain those close to 0, since this means the energy of the noise is close to 0 in this space ; at this stage it is also possible to discard those eigenvectors that have large eigenvalues. let $\hat{P}$ be the first $k$ columns of $P$ ; now $\hat{P}^T B \hat{P} = \hat{D}_B$, where $\hat{D}_B$ is the $k \times k$ principal submatrix of $D_B$. let $W_1 = \hat{P} \hat{D}_B^{-1/2}$ and hence
|
Eigenmoments
|
wikipedia
|
$W_1 = \hat{P} \hat{D}_B^{-1/2}$, and hence : $W_1^T B W_1 = (\hat{P} \hat{D}_B^{-1/2})^T B (\hat{P} \hat{D}_B^{-1/2}) = I$. $W_1$ whitens $B$ and reduces the dimensionality from $m$ to $k$. the transformed space resided by $q' = W_1^T X^T s$ is called the noise space. then, we diagonalize $W_1^T A W_1$ : $W_2^T W_1^T A W_1 W_2 = D_A$, where $W_2^T W_2 = I$. $D_A$ is the matrix with the eigenvalues of $W_1^T A W_1$ on its diagonal. we may retain all the eigenvalues and their corresponding eigenvectors, since most of the noise has already been discarded in the previous step. finally the transformation is given by : $W = W_1 W_2$, where $W$ diagonalizes both the numerator and denominator of the snr : $W^T A W = D_A$
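the two - step construction can likewise be sketched directly ; the following is an illustrative implementation of the whitening - then - diagonalizing procedure just described, not code from the original publication :

```python
import numpy as np

def simultaneous_diagonalization(A, B, k):
    """Build W = W1 W2 such that W^T B W = I and W^T A W is diagonal.

    Step 1 (W1): diagonalize B, retain the k directions with the smallest
    noise eigenvalues, and whiten them.  Step 2 (W2): diagonalize the
    whitened signal term W1^T A W1.
    """
    d_b, P = np.linalg.eigh(B)                  # B = P diag(d_b) P^T, ascending order
    P_hat, d_hat = P[:, :k], d_b[:k]            # keep the k low-noise eigenvectors
    W1 = P_hat / np.sqrt(d_hat)                 # W1 = P_hat @ diag(d_hat)^(-1/2)
    d_a, W2 = np.linalg.eigh(W1.T @ A @ W1)     # diagonalize the whitened A
    return W1 @ W2
```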
|
Eigenmoments
|
wikipedia
|
$W = W_1 W_2$, where $W$ diagonalizes both the numerator and denominator of the snr, $W^T A W = D_A$, $W^T B W = I$, and the transformation of the signal $s$ is defined as $q = W^T X^T s = W_2^T W_1^T X^T s$. = = = information loss = = = to find the information loss when we discard some of the eigenvalues and eigenvectors, we can perform the following analysis : $\eta = 1 - \frac{trace(W_1^T A W_1)}{trace(D_B^{-1/2} P^T A P D_B^{-1/2})} = 1 - \frac{trace(\hat{D}_B^{-1/2} \hat{P}^T A \hat{P} \hat{D}_B^{-1/2})}{trace(D_B^{-1/2} P^T A P D_B^{-1/2})}$. = = eigenmoments = = eigenmoments are derived by applying the above framework on geometric moments. they can be derived for both 1d
|
Eigenmoments
|
wikipedia
|
= = eigenmoments = = eigenmoments are derived by applying the above framework on geometric moments. they can be derived for both 1d and 2d signals. = = = 1d signal = = = if we let $X = [1, x, x^2, \ldots, x^{m-1}]$, i. e. the monomials, after the transformation $X^T$ we obtain the geometric moments, denoted by the vector $M$, of the signal $s = [s(x)]$, i. e. $M = X^T s$. in practice it is difficult to estimate the correlation of the signal due to an insufficient number of samples, therefore parametric approaches are utilized. one such model can be defined as : $r(x_1, x_2) = r(0,0) e^{-c(x_1 - x_2)^2}$, where $r(0,0) = E[tr(ss^T)]$. this model of correlation can be replaced by other models, however this model covers general natural images. since $r(0,0)$ does not affect the maximization, it can be dropped. $A = X^T C X = \int_{-1}^{1} \int_{-1}^{1} [x_1^j x_2^i e^{-c(x_1 - x_2)^2}]_{i,j=0}^{i,j=m-1} dx_1 dx_2$
|
Eigenmoments
|
wikipedia
|
$A = X^T C X = \int_{-1}^{1} \int_{-1}^{1} [x_1^j x_2^i e^{-c(x_1 - x_2)^2}]_{i,j=0}^{i,j=m-1} dx_1 dx_2$. the correlation of the noise can be modelled as $\sigma_n^2 \delta(x_1, x_2)$, where $\sigma_n^2$ is the energy of the noise. again, $\sigma_n^2$ can be dropped because the constant does not have any effect on the maximization problem. $B = X^T N X = \int_{-1}^{1} \int_{-1}^{1} [x_1^j x_2^i \delta(x_1, x_2)]_{i,j=0}^{i,j=m-1} dx_1 dx_2 = \int_{-1}^{1} [x_1^{j+i}]_{i,j=0}^{i,j=m-1} dx_1 = X^T X$. using the computed $A$ and $B$ and applying the algorithm discussed in the previous section, we find $W$ and a set of transformed monomials $\Phi = [\phi_1, \ldots, \phi_k]$
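as an illustration, the 1d matrices could be assembled numerically by approximating the integrals on a grid rather than evaluating them in closed form ( the grid size and the use of numpy are assumptions of this sketch ) :

```python
import numpy as np

def moment_matrices_1d(m=6, c=0.5, grid=2001):
    """Approximate A = X^T C X and B = X^T N X = X^T X for 1D eigenmoments.

    X holds the monomials 1, x, ..., x^(m-1) sampled on [-1, 1]; the signal
    correlation is modelled as exp(-c (x1 - x2)^2) and the noise is white,
    so the integrals reduce to grid-weighted matrix products.
    """
    x = np.linspace(-1.0, 1.0, grid)
    dx = x[1] - x[0]
    X = np.vander(x, m, increasing=True)              # columns: x^0 ... x^(m-1)
    C = np.exp(-c * (x[:, None] - x[None, :]) ** 2)   # correlation model
    A = X.T @ C @ X * dx * dx                         # double integral over x1, x2
    B = X.T @ X * dx                                  # white noise: B = X^T X
    return A, B
```

feeding these matrices into the generalized - eigenproblem sketch above ( with $m = 6$, $k = 4$, $c = 0.5$ ) should give kernels of the same general shape as the example below, though the exact coefficients depend on the integration accuracy and on how the eigenvectors are scaled.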
|
Eigenmoments
|
wikipedia
|
using the computed $A$ and $B$ and applying the algorithm discussed in the previous section, we find $W$ and a set of transformed monomials $\Phi = [\phi_1, \ldots, \phi_k] = XW$, which produces the moment kernels of em. the moment kernels of em decorrelate the correlation in the image, $\Phi^T C \Phi = (XW)^T C (XW) = D_C$, and are orthogonal : $\Phi^T \Phi = (XW)^T (XW) = W^T X^T X W = W^T X^T N X W = W^T B W = I$. = = = = example computation = = = = taking $c = 0.5$, the dimension of the moment space as $m = 6$ and the dimension of the feature space as $k = 4$, we will have : $W = \begin{pmatrix} 0.0 & 0.0 & -0.7745 & -0.8960 \\ 2.8669 & -4.4622 & 0.0 & 0.0 \\ 0.0 & 0.0 & 7.9272 & 2.4523 \\ -4.0225 & 20.6505 & 0.0 & 0.0 \\ 0.0 & 0.0 & -9.2789 & -0.1239 \\ -0.5092 & -18.4582 & 0.0 & 0.0 \end{pmatrix}$
|
Eigenmoments
|
wikipedia
|
$W = \begin{pmatrix} 0.0 & 0.0 & -0.7745 & -0.8960 \\ 2.8669 & -4.4622 & 0.0 & 0.0 \\ 0.0 & 0.0 & 7.9272 & 2.4523 \\ -4.0225 & 20.6505 & 0.0 & 0.0 \\ 0.0 & 0.0 & -9.2789 & -0.1239 \\ -0.5092 & -18.4582 & 0.0 & 0.0 \end{pmatrix}$ and $\phi_1 = 2.8669x - 4.0225x^3 - 0.5092x^5$, $\phi_2 = -4.4622x + 20.6505x^3 - 18.4582x^5$, $\phi_3 = -0.7745 + 7.9272x^2 - 9.2789x^4$, $\phi_4 = -0.8960 + 2.4523x^2 - 0.1239x^4$. = = = 2d signal = = = the derivation for the 2d signal is the same as for the 1d signal, except that conventional geometric moments are directly employed to obtain the set of 2d eigenmoments. the definition of the geometric moment of order $(p+q)$ for a 2d image signal is : $m_{pq} = \int_{-1}^{1} \int_{-1}^{1} x^p y^q f(x,y)\, dx\, dy$
|
Eigenmoments
|
wikipedia
|
the geometric moment of order $(p+q)$ for a 2d image signal is $m_{pq} = \int_{-1}^{1} \int_{-1}^{1} x^p y^q f(x,y)\, dx\, dy$, which can be denoted as $M = \{m_{j,i}\}_{i,j=0}^{i,j=m-1}$. then the set of 2d eigenmoments is : $\Omega = W^T M W$, where $\Omega = \{\omega_{j,i}\}_{i,j=0}^{i,j=k-1}$ is a matrix that contains the set of eigenmoments, with $\omega_{j,i} = \sum_{r=0}^{m-1} \sum_{s=0}^{m-1} w_{r,j} w_{s,i} m_{r,s}$. = = = eigenmoment invariants ( emi ) = = = in order to obtain a set of moment invariants, we can use the normalized geometric moments $\hat{m}$ instead of $m$. normalized geometric moments are invariant to rotation, scaling and translation, and are defined by : $\hat{m}_{pq} = \alpha^{p+q+2} \int_{-1}^{1} \int_{-1}^{1} [(x - x^c)\cos\theta + (y - y^c)\sin\theta]^p \times [-(x - x^c)\sin\theta + (y - y^c)\cos\theta]^q \, f(x,y)\, dx\, dy$
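a minimal sketch of the 2d step, assuming a kernel matrix $W$ has already been obtained as above and that the image is sampled on $[-1,1] \times [-1,1]$ ( both the sampling convention and the grid approximation are assumptions of this sketch ) :

```python
import numpy as np

def geometric_moments_2d(img, m):
    """Geometric moment matrix M for an image sampled on [-1, 1] x [-1, 1];
    entry [q, p] approximates the integral of x^p y^q f(x, y) dx dy."""
    h, w = img.shape
    y = np.linspace(-1.0, 1.0, h)
    x = np.linspace(-1.0, 1.0, w)
    dy, dx = y[1] - y[0], x[1] - x[0]
    Y = np.vander(y, m, increasing=True)   # columns y^0 ... y^(m-1)
    X = np.vander(x, m, increasing=True)   # columns x^0 ... x^(m-1)
    return Y.T @ img @ X * dx * dy

def eigenmoments_2d(img, W, m):
    """2D eigenmoments Omega = W^T M W, given a moment kernel matrix W."""
    M = geometric_moments_2d(img, m)
    return W.T @ M @ W
```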
|
Eigenmoments
|
wikipedia
|
$\hat{m}_{pq} = \alpha^{p+q+2} \int_{-1}^{1} \int_{-1}^{1} [(x - x^c)\cos\theta + (y - y^c)\sin\theta]^p \times [-(x - x^c)\sin\theta + (y - y^c)\cos\theta]^q \, f(x,y)\, dx\, dy$, where : $(x^c, y^c) = (m_{10}/m_{00}, m_{01}/m_{00})$ is the centroid of the image $f(x,y)$, $\alpha = [m_{00}^s / m_{00}]^{1/2}$ and $\theta = \frac{1}{2} \tan^{-1} \frac{2 m_{11}}{m_{20} - m_{02}}$. $m_{00}^s$ in this equation is a scaling factor depending on the image. $m_{00}^s$ is usually set to 1 for binary images. = = see also = = computer
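the centroid, scale factor and orientation above can likewise be sketched numerically ; the $[-1,1] \times [-1,1]$ sampling and the choice $m_{00}^s = 1$ are assumptions of this illustration :

```python
import numpy as np

def normalization_parameters(img):
    """Centroid (xc, yc), scale factor alpha and orientation theta used by the
    normalized geometric moments, assuming the image is sampled on
    [-1, 1] x [-1, 1] and m00_s = 1 (the usual choice for binary images)."""
    h, w = img.shape
    y = np.linspace(-1.0, 1.0, h)
    x = np.linspace(-1.0, 1.0, w)
    dx, dy = x[1] - x[0], y[1] - y[0]
    xx, yy = np.meshgrid(x, y)
    da = dx * dy
    m00 = np.sum(img) * da
    m10 = np.sum(xx * img) * da
    m01 = np.sum(yy * img) * da
    m11 = np.sum(xx * yy * img) * da
    m20 = np.sum(xx ** 2 * img) * da
    m02 = np.sum(yy ** 2 * img) * da
    xc, yc = m10 / m00, m01 / m00
    alpha = np.sqrt(1.0 / m00)                      # [m00_s / m00]^(1/2) with m00_s = 1
    theta = 0.5 * np.arctan2(2.0 * m11, m20 - m02)  # principal-axis orientation
    return xc, yc, alpha, theta
```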
|
Eigenmoments
|
wikipedia
|
$m_{00}^s$ in this equation is a scaling factor depending on the image. $m_{00}^s$ is usually set to 1 for binary images. = = see also = = computer vision signal processing image moment = = references = = = = external links = = implementation of eigenmoments in matlab
|
Eigenmoments
|
wikipedia
|
the dividend puzzle, as originally framed by fischer black, relates to two interrelated questions in corporate finance and financial economics : why do corporations pay dividends, and why do investors " pay attention " to dividends? a key observation here is that companies that pay dividends are rewarded by investors with higher valuations ( in fact, there are several dividend valuation models ; see the theory of investment value ). what is puzzling, however, is that it should not matter to investors whether a firm pays dividends or not : as an owner of the firm, the investor should be indifferent between receiving dividends and having these re - invested in the business ; see modigliani – miller theorem. a further and related observation is that these dividends attract a higher tax rate as compared, e. g., to capital gains from the firm repurchasing shares as an alternative payout policy. for other considerations, see dividend policy and pecking order theory. a range of explanations is provided. the long term holders of these stocks are typically institutional investors. these ( often ) have a need for the liquidity provided by dividends ; further, many, such as pension funds, are tax - exempt. ( see clientele effect. ) from the signalling perspective, cash dividends are " a useful device " to convey insider information about corporate performance to outsiders, and thereby reduce information asymmetry ; see dividend signaling hypothesis. behavioral economics posits that for investors, outcomes received with certainty are overweighted relative to uncertain outcomes ; see prospect theory. thus here, respectively, investors will prefer ( and pay for ) cash dividends, as opposed to reinvestment in the firm with possible consequent price appreciation. = = references = =
|
Dividend puzzle
|
wikipedia
|
the international risk governance center ( irgc ) is a neutral interdisciplinary center based at the ecole polytechnique federale de lausanne ( epfl ) in lausanne, switzerland. irgc develops risk governance strategies that focus on involving all key stakeholder groups, including citizens, governments, businesses and academia. it exists to improve the understanding, management and governance of emerging and systemic risks that may have significant adverse consequences for human health and the environment, the economy and society. its mission includes " developing concepts of risk governance, anticipating major risk issues and providing risk governance policy advice for key decision - makers. " = = history = = irgc began as a non - profit called the international risk governance council in 2003, when academics from various countries proposed to the swiss state secretariat for education and research to create an international and independent body with the mission to develop and implement concepts and actions to improve the governance of risk. the swiss federal assembly then created the international risk governance council to bridge increasing gaps between science, technological development, decision - makers, and the public. it was formally founded in geneva as a private foundation in june 2003. jose mariano gago, the former portuguese minister for science and higher education, was the first chairman of the foundation board followed by donald j. johnston and granger m. morgan. wolfgang kroger was the founding rector. in july 2012, the council was granted special consultative status with the united nations economic and social council ( ecosoc ). as of january 1, 2013, the international risk governance council signed a formal collaboration agreement with epfl and moved to lausanne. the goal of this move was strengthened collaboration with academia, which allowed the council to expand its academic network and further develop its science - based approach. in july 2014, it became a member of the sustainable development solutions network ( sdsn ). in 2016, the council became the international risk governance center ( irgc ) at epfl, where it continues to develop the original mission and activities. = = activities = = irgc's work is rooted in the irgc risk governance framework, which was developed to provide guidance to organizations and society for identifying and managing risks in situations of complexity, uncertainty or ambiguity. irgc develops risk governance concepts and has developed numerous frameworks, including on the governance of emerging and systemic risks. these frameworks are applied to a wide range of specific risk domains. irgc's frameworks are used by numerous institutions and organizations, including the european food safety authority, the health council of the netherlands, the us environmental protection
|
International Risk Governance Center
|
wikipedia
|
and systemic risks. these frameworks are applied to a wide range of specific risk domains. irgc's frameworks are used by numerous institutions and organizations, including the european food safety authority, the health council of the netherlands, the us environmental protection agency, the oecd, and the european commission. at epfl, irgc is one of the centers that act at the interface between academic research and education, and business and policy. irgc interacts with the epfl community and contributes a risk governance approach to their activities. in recent years, irgc has focused increasingly on risks associated with emerging technologies. currently, irgc is active in the areas of nanotechnology, climate engineering, the low - carbon transition, space debris collision risk, deepfakes, and governance of digital technology. past areas of focus include biosecurity, precision medicine, synthetic biology, unconventional gas development, bioenergy, and critical infrastructure. = = see also = = risk analysis risk governance risk management = = references = = = = external links = = official website
|
International Risk Governance Center
|
wikipedia
|
the wood - pasture hypothesis ( also known as the vera hypothesis and the megaherbivore theory ) is a scientific hypothesis positing that open and semi - open pastures and wood - pastures formed the predominant type of landscape in post - glacial temperate europe, rather than the commonly assumed primeval forests. the hypothesis proposes that such a landscape would be formed and maintained by large wild herbivores. although others, including landscape ecologist oliver rackham, had previously expressed similar ideas, it was the dutch researcher frans vera who, in his 2000 book grazing ecology and forest history, first developed a comprehensive framework for such ideas and formulated them into a theory. vera's proposals, although controversial, came at a time when the role grazers played in woodlands was increasingly being reconsidered, and are credited with ushering in a period of increased reassessment and interdisciplinary research in european conservation theory and practice. although vera largely focused his research on the european situation, his findings could also be applied to other temperate ecological regions worldwide, especially the broadleaved ones. vera's ideas have met with both rejection and approval in the scientific community, and continue to lay an important foundation for the rewilding movement. while his proposals for widespread semi - open savanna as the predominant landscape of temperate europe in the early to mid - holocene have largely been rejected, they do partially agree with the established wisdom about vegetation structure during previous interglacials. moreover, modern research has shown that, under the current climate, free - roaming large grazers can indeed influence and even temporarily halt vegetation succession. whether the holocene prior to the rise of agriculture provides an adequate approximation to a state of " pristine nature " at all has also been questioned, since by that time anatomically modern humans had already been omnipresent in europe for millennia, with in all likelihood profound effects on the environment. the severe loss of megafauna at the end of the pleistocene and beginning of the holocene known as the quaternary extinction event, which is frequently linked to human activities, did not leave europe unscathed and brought about a profound change in the european large mammal assemblage and thus in ecosystems as a whole, which probably also affected vegetation patterns. the assumption, however, that the pre - neolithic represents pristine conditions is a prerequisite for both the " high forest theory " and the vera hypothesis in their respective original forms. whether or not the hypothesis is supported may thus further depend on whether or not
|
Wood-pasture hypothesis
|
wikipedia
|
assumption, however, that the pre - neolithic represents pristine conditions is a prerequisite for both the " high forest theory " and the vera hypothesis in their respective original forms. whether or not the hypothesis is supported may thus further depend on whether or not the pre - neolithic holocene is accepted as a baseline for pristine nature, and thus also on whether the quaternary extinction of megafauna is considered ( primarily ) natural or man - made. vera's hypothesis has important repercussions for nature conservation especially, because it advocates for a reorientation of emphasis away from the protection of old - growth forest ( as per the competing high forest theory ) and towards the conservation of open and semi - open grasslands and wood pastures, through extensive grazing. this aspect in particular has attracted considerable attention, and has made vera's hypothesis an important point of reference for conservation grazing and rewilding initiatives. the wood - pasture hypothesis also has points of contact with traditional agricultural practices in europe, which may conserve biodiversity in a similar way to wild herbivore herds. = = names and definitions = = frans vera's hypothesis has many names, since vera himself did not provide a distinguished name for it. instead, he simply referred to it as the alternative hypothesis, alternative to the high - forest theory, which he called the null hypothesis. as a result, it has been called by many names over the years, including the wood - pasture hypothesis, the wooded pasture hypothesis, the vera hypothesis, the temperate savanna hypothesis and the open woodland hypothesis. especially in continental europe, it is commonly known as the megaherbivore hypothesis and literal translations of it. vera limited the geographic area of his ideas to western and central europe between 45°n and 58°n latitude and 5°w and 25°e longitude. this includes most of the british isles and everything between france ( except the southern third ) and poland and southern scandinavia to the alps. furthermore, he confined it to altitudes below 700 m ( 2, 300 ft ). by extension, the north american east coast is also addressed as an analogy with a comparable climate. = = high - forest theory = = = = = heinrich cotta : high - forest theory = = = in his 1817 work anweisungen zum waldbau ( directions for silviculture ), heinrich cotta posited that if humans abandoned his native germany, in the space of 100 years it would be " covered with wood ". this assumption laid the foundation for what is now
|
Wood-pasture hypothesis
|
wikipedia
|
##dbau ( directions for silviculture ), heinrich cotta posited that if humans abandoned his native germany, in the space of 100 years it would be " covered with wood ". this assumption laid the foundation for what is now called the high - forest theory, which assumes that deciduous forests are the naturally predominant ecosystem type in the temperate, broad - leaved regions. = = = frederic clements : linear succession = = = later, this position was accompanied by clements'formulation of the theory of linear succession, meaning that under the right conditions bare ground would, over time, invariably become colonised by a succession of plant communities eventually leading to closed stands dominated by the tallest plant species. because in most of the temperate hemisphere the potentially tallest plants are trees, the final product would therefore chiefly be forest. albeit with changes in conceptualisation and some modifications, this concept remains the one favoured by most, and provides the conceptual framework for many forest - related methods and customs in forestry and conservation. this includes the prozessschutz doctrine advocated by german forest - ecologist knut sturm, which highlights the importance of non - intervention and space of time for forest protection, as it is implemented in forest reserves such as białowieza. = = = further refinements = = = clements'notion of stable climax communities was later challenged and refined by authorities such as arthur tansley, alexander watt and robert whittaker, who championed the inclusion of dynamic processes, like temporary collapse of canopy cover because of windthrow, fire or calamities, into clements'framework. this, however, did not change anything about the status of the " high - forest theory " as the commonly accepted view ; that without human intervention closed - canopy forest would dominate the global temperate regions as the potential natural vegetation. this is also the concept that was advocated by european plant experts like heinz ellenberg, johannes iversen and franz firbas. = = = the reconstruction of vegetation history = = = apart from theoretical considerations, this concept has relied and continues to rely heavily on both field observations and, more recently, on findings from pollen analysis, which allow inferences about the vegetation structure of past epochs. for example, vegetation trends can be reconstructed from the ratio of tree pollen to pollen associated with grassland. pollen analysis is the most widely used means of generating historic vegetation data and the analysis of pollen data has provided a solid database from which a predominance of forest throughout the early stages of the holocene of temperate
|
Wood-pasture hypothesis
|
wikipedia
|
to pollen associated with grassland. pollen analysis is the most widely used means of generating historic vegetation data, and the analysis of pollen data has provided a solid database from which a predominance of forest throughout the early stages of the holocene of temperate europe, especially the atlantic, is generally inferred, although the possibility of regional differences remains open. on that basis, the history of vegetation in europe is generally reconstructed as a history of forest. pollen analysis, however, has been criticized for its inherent bias towards wind - pollinated plant species and, importantly, wind - pollinated trees, and has been shown to overestimate forest cover. to account for this bias, a corrective model ( reveals ) is used, whose application leads to results that differ substantially from those drawn from the traditional comparison of pollen percentages alone. alternatively to or in combination with pollen, fossil indicator organisms – such as beetles and molluscs – can be used to reconstruct vegetation structure. = = = large herbivores and high - forest theory = = = there is no general agreement on herbivores and their influence on succession in natural ecosystems in the temperate hemisphere. in the high - forest theory framework, wild herbivores are mostly considered minor factors, a view derived from the assumption that the natural vegetation was forest. therefore, wild herbivores were characterised by tansley as followers of succession, not as actively influencing it, because otherwise europe would not have been forested. from this assumption the principle was developed that the natural abundance of herbivores does not hinder forest succession, which means that herbivore numbers are necessarily considered too high as soon as they impede natural forest regeneration. for example, wwf russia considers five to seven animals the optimal density of bison per 1000 ha ( 10 km² ), because if the population exceeds 13 animals per 1000 ha, first signs of vegetation suppression are observed. consequently, the bison population in białowieza is controlled by culling. similarly, it is widely believed that two to seven deer per 1 square kilometre ( 1, 000, 000 m2 ) is a sustainable number, based on the assumption that if deer numbers exceed this threshold, they start having a negative impact on woodland regeneration. consequently, culling is commonly seen as necessary to reduce a perceived overabundance of deer to sustainable levels and mimic natural predation. others, however, have criticised this view. in a 2023 publication, brice b. hanberry and edward k. faison argued that in the eastern united states, where white
|
Wood-pasture hypothesis
|
wikipedia
|
##undance of deer to sustainable levels and mimic natural predation. others, however, have criticised this view. in a 2023 publication, brice b. hanberry and edward k. faison argued that in the eastern united states, where white - tailed deer are commonly considered overabundant due to the extirpation of wolves and cougars, there are currently no more deer than there were historically when these predators were present. furthermore, they found that even at densities that are perceived as too high, the influence of deer may be ecologically beneficial. the assumption that population control through hunting is necessary in order to mimic the effect of natural predators is also not entirely supported by scientific analyses of natural predator - prey dynamics. instead, the control of herbivore numbers in nature probably depends on other factors. a perhaps more important influence predators may have on prey animals is the landscape of fear their presence can create, promoting landscape heterogeneity. however, in the presence of megafauna over 1, 000 kilograms ( 0. 98 long tons ; 1. 1 short tons ), which are largely immune to predation, even this ability is limited. overall, how ungulate populations are controlled in nature is controversial, and food availability is an important constraint, even in the presence of apex predators. in regions with relatively intact large - mammal assemblages in africa and asia, as well as in european rewilding areas where " naturalistic grazing " is practised, herbivore biomass exceeds the values commonly deemed appropriate for temperate forests many times over. here, herbivore biomass reaches a maximum of 16, 000 kilograms ( 16 long tons ; 18 short tons ) per 1 square kilometre ( 0. 39 sq mi ), while the mammoth steppe with an estimated 10, 500 kilograms ( 10. 3 long tons ; 11. 6 short tons ) per km2 falls within a similar range. the herbivore biomass of britain during the eemian interglacial has been estimated as more than 15, 000 kilograms ( 15 long tons ; 17 short tons ) per km2, which is equivalent to more than 2. 5 fallow deer per ha. hence, the ecologist christopher sandom and others have suggested that the comparatively high forest cover of the pre - neolithic european holocene may be a consequence of megaherbivore extinctions during the quaternary extinction event, as compared to the last interglacial in europe with a pristine megafauna, the eemian, the early stages of the
= = = = background : grazers and browsers = = = = the impact herbivores have on the landscape level depends on their way of feeding. namely, browsers like roe deer, elk and the black rhino focus on woody vegetation, while the diet of grazers like horse, cattle and the white rhino is dominated by grasses and forbs. intermediate feeders, like the wisent and the red deer, fall in between. generally, grazers tend to be more social, less selective in their food choices and forage more intensively. therefore, their impact on vegetation composition tends to be higher, as well as their ability to maintain open spaces. since the extinction of the aurochs in 1627 and the wild horse around 1900, none of the remaining large wild herbivores in europe is an obligate grazer. similarly, domesticated descendants of aurochs and wild horse, cattle and horse, are now largely kept in stables, factory farms and close to settlements, making them effectively extinct in the landscape. what remains are browsers and mixed feeders – roe deer, red deer, elk, wild boar, wisent and beaver, often in low densities. backbreeding projects, such as the german taurus project and the dutch tauros programme, are addressing this issue by breeding domestic cattle that can be released into the landscape as hardy and sufficiently similar proxies to act as ecological replacements for the aurochs. similarly, primitive horse breeds such as the konik, exmoor pony and the sorraia are being used as proxies for the tarpan. = = frans vera = = vera argued that the dominant landscape type of the early to mid - holocene was not closed forest, but a semi - open, park - like one. this semi - open landscape, he proposed, was created and maintained by large herbivores. during the holocene, these herbivores included aurochs, european bison, red deer and tarpan.
up to the quaternary extinctions, many other megafaunal mammals like the straight - tusked elephant or merck's rhinoceros existed in europe as well and probably kept the forests open during warm interglacial periods like the eemian interglacial. vera also postulated that lowland forest did not emerge on a large scale before the onset of the neolithic period and subsequent local extinctions of herbivores, which in turn allowed forests to thrive more unhindered. indeed, investigations point to at least locally open circumstances, for example in floodplains, on infertile soils, chalklands and in submediterranean and continental areas, but maintain that forest largely dominated. in his book vera also discussed the decline of ancient oak - hickory forest communities in eastern north america. many forests that stem from pre - columbian times ( old - growth forests ) feature light - demanding oaks and hickories prominently. however, these do not readily regenerate in modern forests, a phenomenon commonly referred to as oak regeneration failure. instead, shade - tolerant species such as red maple and american beech dominate increasingly. while the cause is still poorly understood, a lack of natural fire is commonly presumed to play a role. vera instead suggested that the grazing and browsing of wild herbivores, most importantly american bison, created the conditions oaks and hickories need for successful regeneration to happen, and attributed the modern lack of regeneration of these species in forests to the mass slaughter of bison committed by european settlers. paleoecological evidence drawn from fossil coleoptera deposits has also shown that, albeit rare, beetle species associated with grasslands and other open landscapes were present throughout the holocene of western europe, which points to open habitats being present, but restricted. however, paleoecological data from previous interglacials when the larger megafauna was still present indicate widespread warm temperate savannah. this could mean that elephants and rhinos were more effective creators of open landscapes than the herbivores left after the quaternary extinction event.
on the other hand, traditional animal husbandry may have mitigated the effects of possibly human - induced megafaunal die - off, allowing the survival of species of the open landscape previously created and maintained by megafauna. frans vera was not the first to question the high - forest paradigm. the botanist francis rose had already expressed doubts in the 1960s, based on his knowledge of british plant and lichen species and their light requirements. the relationship between large grazers and landscape openness, and the significance of the quaternary extinctions of megafauna in this regard, had also been recognized prior to vera. in 1992, for example, the archaeologist wilhelm schule theorized that the genesis of closed forest in temperate europe was the result of prehistoric man - made megafauna extinctions. landscape ecologist oliver rackham, in a 1998 article entitled " savanna in europe ", envisaged a kind of savanna as the original predominant landscape type of northwestern europe. vera, however, was the first to develop a comprehensive theorem to explain why forest did not dominate even in the holocene, and to thus propose a real alternative to the high - forest theory. in some of its aspects, the wood - pasture hypothesis bears similarity to the steppe theory proposed by robert gradmann, which was challenged and refuted by scholars such as reinhold tuxen and karl bertsch. = = = main arguments = = = = = = = oak and hazel = = = = vera relies on several lines of argument based on experiments, ecology, evolutionary ecology, palynology, history and etymology. one of his main arguments is of an ecological nature : the widespread lack of successful regeneration of light - demanding tree species in modern forests, especially pedunculate oak, sessile oak ( together hereafter addressed as " oak " ) and common hazel in europe. he contrasts this reality with european pollen deposits from previous ages, where oak and hazel often form a dominant amount of pollen, making a dominance of these species in previous ages conceivable. especially in regard to hazel, sufficient flowering is only achieved when enough sunlight is available, i. e. when the plant grows outside of a closed canopy. he argues that the only explanation for the great abundance of oak and hazel pollen in previous ages is that the primeval landscape was open, and this contrast forms the principal theorem of his hypothesis.
it has also been suggested that oak requires disturbances for successful establishment, disturbances that large herbivores may provide. however, pollen records from islands that lacked many of the large grazers and browsers that, according to vera, were essential for the maintenance of landscapes with an open character in temperate europe, show almost no differences in comparison to mainland europe. more specifically, pollen records from holocene ireland, which during the early holocene was, judging from the fossil record, apparently devoid of any big herbivores except for abundant wild boar and rare red deer, show almost equally high percentages of oak and hazel pollen. thus it could be concluded that large herbivores were not a required factor for the degree of openness in a landscape, and that the abundance of pollen from species that are unable to reproduce and regenerate sufficiently under a closed canopy, such as hazel and oak, can only be explained by other factors like windthrow and natural fires. vera's notion may be supported by observations over the course of 20 years of forest regeneration in gaps created by windthrow, which showed that hornbeam and beech dominate the emerging stands and largely displace oaks on fertile, nutrient - rich soil. however, after the last ice age oak returned earlier to central and western europe than beech or hornbeam, which may have contributed to its commonness, at least during the early holocene. still, other shade - tolerant tree species like lime and elm were equally fast returnees, and do not seem to have limited oak abundance. on the other hand, substantial natural oak regeneration commonly takes place outside of forests in fringe and transitional habitats, suggesting that a focus on regeneration in forests in an attempt to explain oak regeneration failure may be insufficient in regard to the ecology of central european oak species. rather, an underestimated reason for widespread failure of oak regeneration may be found in the direct effects of land - use changes since the early modern period, which have led to a more simplistic, homogeneous landscape, as spontaneous regeneration of both oak and hazel does frequently occur in margins, thickets, and low - grazing - intensity or abandoned pasture / arable land. overall, oak is an adept coloniser of open areas and especially of transitional zones between vegetation zones such as forest and open grassland.
looking for regeneration within forests may therefore be futile from the outset. there is, therefore, no general " failure " in oak regeneration, but only a failure of oak regeneration within closed forests. this, however, may be expected and natural given oak's colonising nature. furthermore, a new species of oak mildew ( erysiphe alphitoides ), observed on european oaks for the first time at the beginning of the 20th century, has been cited as a possible reason for the modern lack of oak regeneration in forests, since it affects the shade tolerance, particularly of young pedunculate and sessile oaks. although the origin of this new oak pathogen remains obscure, it seems to be an invasive species from the tropics, possibly conspecific with a pathogen found on mangos. = = = = ecological anachronisms = = = = vera prominently argued that since other light - demanding and often thorny woody species exist in europe — species such as common hawthorn, midland hawthorn, blackthorn, crataegus rhipidophylla, wild pear and crab apple — their ecology can only be explained under the influence of large herbivores, and that in the absence of these they represent an anachronism. = = = = shortcomings of pollen analysis = = = = vera further contested that pollen diagrams can adequately display past species occurrences since, inherently, pollen deposits tend to overrepresent species that are wind - pollinated and notoriously underrepresent species that are pollinated by insects. furthermore, he proposed that an absence of grass pollen in pollen diagrams can be explained by high grazing pressure, which would prevent the grasses from flowering. under such conditions, he claimed, open environments with only scattered mature trees may appear as closed forests in pollen deposits. he consequently proposed that the conspicuous scarcity of grass pollen in pollen deposits dating from the pre - neolithic holocene might not necessarily speak against the existence of open environments dominated by grasses. however, it is generally considered that over 60 % tree pollen in pollen deposits indicates a closed forest canopy, which is true for the vast majority of european early to mid - holocene deposits. sites with less than 50 % arboreal pollen, on the other hand, are consistently associated with human activities.
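the bias argument can be illustrated with a deliberately simplified, purely illustrative correction : dividing each taxon's raw pollen percentage by a relative pollen productivity factor and renormalising. this is only a sketch of the general idea behind productivity - based corrections such as reveals, not the reveals algorithm itself ( which, to the best understanding here, also models pollen dispersal and deposition ), and both the function name and the productivity values below are invented for illustration :

```python
# illustrative sketch only: a first-order pollen-productivity correction.
# the relative productivity factors below are invented for illustration and
# are not calibrated values from REVEALS or any published study.

def correct_pollen(raw_percent, rel_productivity):
    """Divide each taxon's pollen percentage by its relative pollen
    productivity, then renormalise so the corrected values sum to 100."""
    adjusted = {taxon: pct / rel_productivity[taxon]
                for taxon, pct in raw_percent.items()}
    total = sum(adjusted.values())
    return {taxon: 100 * value / total for taxon, value in adjusted.items()}

# hypothetical pollen spectrum dominated by wind-pollinated trees
raw = {"oak": 35.0, "hazel": 30.0, "lime": 5.0, "grasses": 25.0, "herbs": 5.0}
# hypothetical relative productivities (wind-pollinated trees shed far more pollen)
prod = {"oak": 4.0, "hazel": 6.0, "lime": 0.8, "grasses": 1.0, "herbs": 0.5}

corrected = correct_pollen(raw, prod)
print(corrected)  # the apparent share of grasses and herbs rises markedly
```

with these made - up numbers, tree pollen drops from 70 % of the raw spectrum to roughly a third after correction, which is the kind of shift that makes the inferred degree of landscape openness so sensitive to the productivity assumptions used.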
= = = = circular reasoning = = = = vera stressed that the prevailing high - forest theory was born out of observations of spontaneous regeneration in the absence of grazing animals. he argued that the presupposition that these animals do not exert a significant influence on natural regeneration, and thus on the vegetation structure as a whole, has been made without comparative confirmation, and is therefore a circular argument. indeed, modern forestry and forest theory arose largely in the modern era and went hand in hand with the ongoing inclosure of common land throughout europe. a consequence thereof was in many cases a ban of livestock from the forests, which had previously largely been open woodland pastures, often dominated by oaks. these were multifunctional and used for a range of purposes, from pannage and livestock grazing to the harvest of tree hay, coppice, timber and oak galls for the manufacture of ink, as well as for the production of charcoal, crops and fruit. this former usage of forests is often still revealed by a large age gap between tree generations, particularly if the oldest trees are mainly oaks, and many central european forest reserves originated as common wood - pastures. = = = = shifted baselines = = = = in nature conservation, a shifted baseline is a baseline for conservation targets and desired population sizes that is based on non - pristine conditions. in this sense, the term was coined by marine biologist daniel pauly when he observed that some fisheries scientists used the population sizes of fish at the beginning of their own careers to assess a desired baseline, notwithstanding whether the fishing stocks they used as baselines had already been diminished by human exploitation. he noticed that the estimates these scientists took for reference markedly differed from historical accounts. consequently, he concluded that over generations the perception of what is considered to be normal would change, and so may what is considered a depleted population. pauly called this the shifting baseline syndrome. in line with this, it may be argued that the dominance of closed - canopy forest as the prevailing conservation narrative in europe similarly arises from multiple shifted baselines : while it is plausible that lions ( panthera speleae, p. leo leo ), leopards ( panthera pardus spelaea, p. pardus tulliana ), hyenas ( hyaena hyaena prisca,
crocuta crocuta spelaea ), dholes ( cuon alpinus europeus ), wild ass ( equus hydruntinus, e. hemionus kulan ) and moon bears ( ursus thibetanus mediterraneus, u. t. permjak ), among other victims of european quaternary and holocene extinctions, would still be native to europe, had they not been evicted by humans, none of these species are listed as such in the eu's habitats directive's annexes. likewise, globally extinct megafauna such as straight - tusked elephants and rhinos would likely be native to europe without human interference, and they would in all probability have a strong positive impact on biodiversity and ecosystem functions. it is therefore very likely that the megafauna extinctions of the late pleistocene and early holocene had profound implications for european and worldwide ecosystems, especially given the paramount importance comparable animals have for modern ecosystems. vera pointed out that words like wold and forest used to have different connotations than they do today. while today a forest is a dense and reasonably large tract of trees, the medieval latin forestis, from which it derives, denoted open stands of trees, and was a wild and uncultivated land, home also to aurochs and wild horses. according to historical sources, these forestis included hawthorn, blackthorn, wild cherry, wild apple and wild pear, as well as oaks, all of which are light - demanding species that cannot regenerate successfully in closed - canopy forest. from this vera concluded that original wildwoods still existed in europe during the medieval period. thus, when scholars of the 19th and 20th century assumed that grazing animals had destroyed the original european closed - canopy wildwoods, they were misinterpreting these terms. instead, these forests, he found, had been destroyed following the industrial revolution and the population growth it caused, which in turn caused overexploitation. he further argued that this initial misinterpretation gave rise to another misinterpretation : that forest regeneration would naturally take place inside the forest.
thus, scholars of the 19th and 20th century such as the forester elias landolt interpreted medieval grazing regulations that were meant to allow tree regeneration in coppiced mantle and fringe vegetation as if they had been intended to allow regeneration inside a forest. in their time, solid firewood was preferred to the medieval coppice bundles, e. g. faggots. however, the production of solid firewood required the felling of trees at an age when they could no longer produce suckers, an ability that trees commonly lose with progressing age. this then led to a different management system : the replacement of felled trees by saplings that were planted or naturally regenerated via, for example, shelterwood cuttings. initially, these trees regenerated inside the forests were differentiated from wild growth outside the forests. in german, the former were referred to as natural regeneration ( naturverjungung ) while the latter had a different name : holzwildwuchse. thus, natural regeneration was not synonymous with the natural regeneration of trees in a natural situation. it was not until the 19th and 20th centuries that this distinction was abandoned in german. however, in the absence of thorny nurse bushes, which disappeared due to the shade under the trees, the planted trees then had to be protected manually. the " natural regeneration " therefore still depended on work like ploughing, removal of browsing pressure and the suppression of weeds, making it not " natural " in the conventional sense. instead, according to vera, the original meaning of the word " natural " in this context was that a seed fell from a tree and then grew by itself, as opposed to being planted. this shift in where tree regeneration was expected to occur, from thorny fringes of groves in wood - pastures to the interior of closed tree stands, then led to the notion that herbivores were detrimental to forest regeneration, and necessitated fenced - out areas, tree shelters and population control via hunting. considered " alien " to the landscape, akin to invasive species, cattle and horses were now also removed from the forests, as happened in former wood - pastures like białowieza, because they were seen as harmful to the creation of a new old - growth forest.
at the same time, the introduction of the potato made pannage, the fattening of pigs on acorns, obsolete, and grass species specifically bred for a high yield superseded the traditional pasturing, mostly of cattle, in wood - pastures. together, these mechanisms created the spatial separation between livestock rearing and forestry, and between grassland and forest, that is enshrined in modern law and practice. finally, the biodiversity losses associated with the conversion of open grassland, mantle and fringe vegetation and open - grown trees into closed - canopy forests were legitimised by the assumption that the forest was the only natural ecosystem, and hence species losses were casualties of a natural cause. however, a strong argument that may put vera's etymological evidence into perspective altogether is that the composition of medieval woodlands may not be relevant to their naturalness. since by the medieval period agricultural traditions had already been ubiquitous in most of europe for millennia, it may be unrealistic to assume that what people of the time perceived and labelled as wilderness indeed was one. instead, it is doubtful that pristine conditions had survived in the central and western european lowlands, vera's area of study, at any rate up to this point. = = = succession in grazed ecosystems = = = there are several ecological processes at work in herbivore grazing systems, namely associational resistance, shifting mosaics, cyclic succession, and gap dynamics. these processes would collectively transform the surrounding landscape, as per vera's model. = = = = associational resistance = = = = the term associational resistance describes facilitating relationships between plants that grow close to each other, against both biotic and abiotic stresses like browsing, drought, or salinity. in relation to grazed ecosystems, it can allow for the recruitment of trees and other palatable woody species, via thorny nurse bushes, in these environments. it has been proposed and demonstrated that associational resistance can be a key process in grazed environments, ensuring natural succession. in temperate europe, succession on pastures commonly starts with so - called " islets " ( " geilstellen " ), patches of dung which are avoided by the herbivores for an amount of time after deposition, sufficient to allow the establishment of relatively unpalatable species such as rushes, nettles and hummocks of tall grasses like tussock grass.
these swards, in turn, provide protection for thorny shrubs such as blackthorn, roses, hawthorn, juniper, bramble, holly and barberry during their early years, when they do not yet have protective thorns and are therefore vulnerable. once the thorny saplings are fully established, they grow bigger over time and subsequently allow other, less resilient species to establish in their thorn protection, forming mantle and fringe vegetation together with species such as guelder rose, wild privet and dogwood. other species such as mazzard, checker tree, rowan and whitebeam, which are distributed by fruit - eating birds through their faeces, would also frequently be placed within these shrubs, through resting birds leaving their droppings. on the other hand, nut - bearing species such as hazel, beech, chestnut, pedunculate and sessile oak would become " planted " somewhat deliberately in the vicinity of those shrubs by rodents such as the red squirrel and wood mouse, as well as by the nuthatch and corvids such as crows, magpies, ravens and especially jays, which store them for winter supply. in europe, the eurasian jay represents the most important seed disperser of oak, burying acorns individually or in small groups. eurasian jays not only bury acorns at depths favoured by oak saplings, but seemingly also prefer spots with sufficient light availability, i. e. open grassland and transitions between grassland and shrubland, seeking vertical structures such as shrubs in the near surroundings. since oak is relatively light - demanding while not having the ability to regenerate on its own under high browsing pressure, these habits of the jay presumably benefit oak, since they provide the conditions oak requires for optimal growth and health. on a similar note, the nuthatch seems to assume a prominent role in hazel dispersal. in addition, species such as wild pear, crab apple and whitty pear, which bear relatively large fruit, would find propagators in herbivores such as roe deer, red deer and cattle, or in omnivores such as the wild boar, red fox, the european badger and the raccoon, while wind - dispersed species such as maple, elm, lime or ash would land within these shrubs by chance.
thorny bushes play an important role in tree regeneration in the european lowlands, and evidence is emerging that similar processes can also ensure the survival of browsing - sensitive species like rowan in browsed boreal forests. = = = = shifting mosaics and cyclic succession = = = = a natural pasture ecosystem would therefore undergo various stages of succession, starting with unpalatable perennial plants, which provide shelter for thorny woody plants. these would then start to form thickets and enable the establishment of larger, palatable shrubs and trees. over time these would overshadow the unpalatable but light - demanding thickets and emerge as big solitary trees, in the case of single - standing shrubs like hawthorn, or as groups of trees in the case of expanding blackthorn shrubs. because of the herbivore disturbance ( browsing, trampling, wallowing, dust bathing ), not even shade - tolerant tree saplings would be able to grow under the established trees. therefore, once the established trees start to decay, either due to old age or other factors like pathogens, illness, lightning strike or windbreak, this would leave open, bare land behind, for grasses and unpalatable species to colonise, closing the cycle. on a large scale, the different successional stages would thus together form an ecosystem where open grassland, scrubland, emerging tree growth, groves of trees and solitary trees exist next to each other, and the alternation between these various successional stages would create dynamic shifting mosaics of vegetation. this in turn stimulates high biodiversity. consequently, vera's counter - proposal to linear succession and to watt's gap - phase model of closed - canopy forest, to which it has been compared, is a model of successional cycles known as the shifting mosaics model. in effect, however, not all areas would have necessarily been subject to this permanent change. since grazing animals generally prefer to spend time in grasslands rather than in closed stands of trees, it would practically be possible for three different landscape types to coexist over longer periods in the same spots : permanently open areas, permanently closed groves and areas subject to constant shifting mosaics.
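the cyclic succession described above can be caricatured as a simple patch - based state machine. the sketch below is only a toy illustration of the shifting - mosaic idea ; the four states and all transition probabilities are invented for illustration and are not drawn from vera or from any study cited here :

```python
# toy illustration of cyclic succession under grazing; the states and all
# transition probabilities are invented for illustration only.
import random

# each patch cycles: grassland -> scrub -> grove -> break-up -> grassland
TRANSITIONS = {
    "grassland": ("scrub", 0.05),     # thorny shrubs establish on ungrazed "islets"
    "scrub":     ("grove", 0.10),     # trees grow up inside the thorn protection
    "grove":     ("break-up", 0.02),  # old trees decay, nothing regenerates beneath them
    "break-up":  ("grassland", 0.50), # grazers keep the newly opened ground as grassland
}

def step(patches):
    """Advance every patch by one time step using the toy probabilities above."""
    out = []
    for state in patches:
        nxt, p = TRANSITIONS[state]
        out.append(nxt if random.random() < p else state)
    return out

random.seed(1)
patches = ["grassland"] * 100   # a landscape of 100 patches, all open at first
for _ in range(500):
    patches = step(patches)

# after many steps the landscape settles into a shifting mosaic in which open,
# scrubby and wooded patches coexist side by side, each patch still cycling
print({s: patches.count(s) for s in TRANSITIONS})
```

because each patch cycles independently, the landscape as a whole holds a mixture of open, scrub and wooded patches at any one time even though no individual patch is permanent, which is the essence of the shifting mosaics model as opposed to a single linear succession towards closed forest.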
= = the prehistoric baseline = = = = = the eemian landscape = = = although vera himself limited his argument to the holocene and the fauna present into historical times, research better supports his claims in regard to earlier interglacials. modern humans have likely exerted a strong influence in europe since their first appearance here during the weichselian glaciation, which has led some researchers to criticize vera's choice of the early to mid holocene as his benchmark for pristine nature. instead, they argue that pristine nature only existed in europe before the arrival of homo sapiens. they argue that the best model for what a truly natural landscape during a warm period in europe would look like is the eemian interglacial, which was the last warm period before the current holocene, approximately 130, 000 to 115, 000 years ago, and the last warm period before the arrival of homo sapiens. while archaic humans existed in the form of neanderthals, their influence was probably only localised, due to their low population density. during this warm period, paleoecological data indeed suggest that semi - open landscapes, as postulated by vera, were widespread and common, most likely maintained by large herbivores. next to these semi - open landscapes, however, the researchers also found evidence for closed - canopy forest. overall, the eemian landscape appears to have been very dynamic and probably consisted of varying degrees of openness, including open grasslands, wood pastures, light - open woodland and closed - canopy forest, with high local heterogeneity. = = = the european megafauna = = = the eemian interglacial was one of many warm interglacials during the quaternary, of which the holocene ( or flandrian interglacial ) is the most recent. these alternating glacial and interglacial periods, triggered by the milankovitch cycles, in turn had a profound influence on life. during the middle to late pleistocene, the result of this cycling was that two very different faunal and floral assemblages took turns in central europe. the warm - temperate palaeoloxodon - faunal assemblage, consisting of the straight - tusked elephant, merck's rhinoceros, the narrow - nosed rhinoceros, hippopotamuses, european water buffalo, aurochs, and several species of deer, among others ( including most of
today's european fauna ), had its core area in the mediterranean. the warm - temperate assemblage periodically expanded from there into the rest of europe during warm interglacials, and receded during glacial periods into refugia in the mediterranean. meanwhile, the cold - temperate faunal assemblage of the mammoth steppe, consisting of the woolly mammoth, woolly rhinoceros, reindeer, saiga, muskox, steppe bison, arctic fox and lemming among others, was spread across vast areas of northern eurasia as well as north america, and during periodic cold glacials advanced deep into europe. other animals, such as horses, steppe lions, the scimitar cat, the ice age spotted hyena and wolves, were part of both faunal assemblages. both groups of animals spread and retreated cyclically, depending on whether the climate favoured one or the other, but essentially remained intact in refugia that continued to provide the conditions they preferred. = = = the quaternary extinction event = = = prior to the last glacial maximum, however, elements of the warm - temperate palaeoloxodon - fauna ( hippopotamus, straight - tusked elephant, the two stephanorhinus species and neanderthals, for example ) as well as the steppe species elasmotherium sibiricum started to disappear and eventually went extinct. at the onset of the last glacial maximum, populations of the ice age spotted hyena and the cave bear complex ( ursus spelaeus, ursus ingressus ) seem to have collapsed on a large scale, and these became extinct next. after the last glacial maximum and towards the holocene, extinctions continued, with many emblematic " ice age species " of the mammoth steppe and adjacent habitats, such as the woolly rhinoceros, the steppe lion, the giant deer and the woolly mammoth, falling victim, although small regional populations of woolly mammoth and steppe bison held out well into the holocene, and the giant deer was present in the southern ural region into historical times. these extinctions have been variously credited to human impact, climate change, or a combination of the two. these extinctions were not limited to europe or the palearctic