source_dataset: stringclasses (1 value)
question: stringlengths (6 to 1.87k)
choices: stringlengths (20 to 1.02k)
answer: stringclasses (4 values)
rationale: float64
documents: stringlengths (1.01k to 5.9k)
epfl-collab
Graph coloring consists of coloring all vertices \ldots
['\\ldots with a unique color.', '\\ldots with a random color.', '\\ldots with a maximum number of colors.', '\\ldots with a different color when they are linked with an edge.']
D
null
Document 1::: Graph color In graph theory, graph coloring is a special case of graph labeling; it is an assignment of labels traditionally called "colors" to elements of a graph subject to certain constraints. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face or region so that no two faces that share a boundary have the same color. Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. Document 2::: De Bruijn–Erdős theorem (graph theory) A graph coloring associates each vertex with a color drawn from a set of colors, in such a way that every edge has two different colors at its endpoints. A frequent goal in graph coloring is to minimize the total number of colors that are used; the chromatic number of a graph is this minimum number of colors. The four-color theorem states that every finite graph that can be drawn without crossings in the Euclidean plane needs at most four colors; however, some graphs with more complicated connectivity require more than four colors. Document 3::: Graph color Total coloring is a type of coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its end-vertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the fewest colors needed in any total coloring of G. Document 4::: Coloring algorithm When used without any qualification, a coloring of a graph almost always refers to a proper vertex coloring, namely a labeling of the graph's vertices with colors such that no two vertices sharing the same edge have the same color. Since a vertex with a loop (i.e. a connection directly back to itself) could never be properly colored, it is understood that graphs in this context are loopless. The terminology of using colors for vertex labels goes back to map coloring. Document 5::: Graph color The convention of using colors originates from coloring the countries of a map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs.
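The documents above describe a proper vertex coloring: two vertices joined by an edge must receive different colors, which is what answer D states. A minimal illustrative Python sketch of that constraint (the graph and function names below are invented for this note, not taken from the documents):

```python
# Greedy proper vertex coloring: each vertex gets the smallest color
# not already used by one of its colored neighbors.
def greedy_coloring(adjacency):
    colors = {}
    for v in adjacency:
        taken = {colors[u] for u in adjacency[v] if u in colors}
        colors[v] = next(c for c in range(len(adjacency)) if c not in taken)
    return colors

def is_proper(adjacency, colors):
    # The defining constraint: endpoints of every edge differ in color.
    return all(colors[u] != colors[v] for u in adjacency for v in adjacency[u])

if __name__ == "__main__":
    cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # a 4-cycle
    coloring = greedy_coloring(cycle4)
    print(coloring, "proper:", is_proper(cycle4, coloring))
```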
epfl-collab
What adversarial model does not make sense for a message authentication code (MAC)?
['key recovery.', 'existential forgery.', 'decryption.', 'universal forgery.']
C
null
Document 1::: Adversarial machine learning Adversarial machine learning is the study of the attacks on machine learning algorithms, and of the defenses against such attacks. A survey from May 2020 exposes the fact that practitioners report a dire need for better protecting machine learning systems in industrial applications.To understand, note that most machine learning techniques are mostly designed to work on specific problem sets, under the assumption that the training and test data are generated from the same statistical distribution (IID). However, this assumption is often dangerously violated in practical high-stake applications, where users may intentionally supply fabricated data that violates the statistical assumption. Some of the most common attacks in adversarial machine learning include evasion attacks, data poisoning attacks, Byzantine attacks and model extraction. Document 2::: Adversarial machine learning Adversarial machine learning is the study of the attacks on machine learning algorithms, and of the defenses against such attacks. A survey from May 2020 exposes the fact that practitioners report a dire need for better protecting machine learning systems in industrial applications.To understand, note that most machine learning techniques are mostly designed to work on specific problem sets, under the assumption that the training and test data are generated from the same statistical distribution (IID). However, this assumption is often dangerously violated in practical high-stake applications, where users may intentionally supply fabricated data that violates the statistical assumption. Some of the most common attacks in adversarial machine learning include evasion attacks, data poisoning attacks, Byzantine attacks and model extraction. Document 3::: PMAC (cryptography) PMAC, which stands for parallelizable MAC, is a message authentication code algorithm. It was created by Phillip Rogaway. PMAC is a method of taking a block cipher and creating an efficient message authentication code that is reducible in security to the underlying block cipher. PMAC is similar in functionality to the OMAC algorithm. Document 4::: Adversary (cryptography) Eve, Mallory, Oscar and Trudy are all adversarial characters widely used in both types of texts. This notion of an adversary helps both intuitive and formal reasoning about cryptosystems by casting security analysis of cryptosystems as a 'game' between the users and a centrally co-ordinated enemy. The notion of security of a cryptosystem is meaningful only with respect to particular attacks (usually presumed to be carried out by particular sorts of adversaries). Document 5::: Standard model (cryptography) Security proofs are notoriously difficult to achieve in the standard model, so in many proofs, cryptographic primitives are replaced by idealized versions. The most common example of this technique, known as the random oracle model, involves replacing a cryptographic hash function with a genuinely random function. Another example is the generic group model, where the adversary is given access to a randomly chosen encoding of a group, instead of the finite field or elliptic curve groups used in practice.
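Document 4 frames security as a game against an adversary with a stated goal. For a MAC the natural goals are key recovery and (existential or universal) forgery of a valid tag; a MAC outputs an authentication tag rather than a ciphertext, so there is nothing to "decrypt", which is why answer C does not make sense. A minimal sketch using Python's standard hmac module, added here purely for illustration:

```python
# A MAC produces and verifies tags; it does not hide the message,
# so "decryption" is not a meaningful attack goal against it.
import hashlib
import hmac

key = b"shared secret key"
msg = b"transfer 100 CHF to Alice"

tag = hmac.new(key, msg, hashlib.sha256).digest()

# Verification succeeds for the genuine message ...
print(hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()))            # True
# ... and fails for a tampered one (what a forger would need to defeat).
print(hmac.compare_digest(tag, hmac.new(key, b"to Mallory", hashlib.sha256).digest()))  # False
```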
epfl-collab
Which one of these ciphers does achieve perfect secrecy?
['DES', 'Vernam', 'RSA', 'FOX']
B
null
Document 1::: Block cipher Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators. Document 2::: Cipher Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext. Most modern ciphers can be categorized in several ways By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers). By whether the same key is used for both encryption and decryption (symmetric key algorithms), or if a different key is used for each (asymmetric key algorithms). Document 3::: Square (cipher) In cryptography, Square (sometimes written SQUARE) is a block cipher invented by Joan Daemen and Vincent Rijmen. The design, published in 1997, is a forerunner to Rijndael, which has been adopted as the Advanced Encryption Standard. Square was introduced together with a new form of cryptanalysis discovered by Lars Knudsen, called the "Square attack". The structure of Square is a substitution–permutation network with eight rounds, operating on 128-bit blocks and using a 128-bit key. Square is not patented. Document 4::: CS-Cipher In cryptography, CS-Cipher (for Chiffrement Symétrique) is a block cipher invented by Jacques Stern and Serge Vaudenay in 1998. It was submitted to the NESSIE project, but was not selected. The algorithm uses a key length between 0 and 128 bits (length must be a multiple of 8 bits). By default, the cipher uses 128 bits. It operates on blocks of 64 bits using an 8-round Feistel network and is optimized for 8-bit processors. The round function is based on the fast Fourier transform and uses the binary expansion of e as a source of "nothing up my sleeve numbers". Document 5::: Four-square cipher The four-square cipher is a manual symmetric encryption technique. It was invented by the French cryptographer Felix Delastelle. The technique encrypts pairs of letters (digraphs), and thus falls into a category of ciphers known as polygraphic substitution ciphers. This adds significant strength to the encryption when compared with monographic substitution ciphers which operate on single characters. The use of digraphs makes the four-square technique less susceptible to frequency analysis attacks, as the analysis must be done on 676 possible digraphs rather than just 26 for monographic substitution. The frequency analysis of digraphs is possible, but considerably more difficult - and it generally requires a much larger ciphertext in order to be useful.
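Answer B refers to the Vernam (one-time pad) construction: with a uniformly random key as long as the message and never reused, the ciphertext reveals nothing about the plaintext, which is Shannon's notion of perfect secrecy. A minimal illustrative sketch in Python (not drawn from the documents above, which discuss block ciphers):

```python
# One-time pad: XOR the message with a fresh uniformly random key of equal length.
import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "key must be exactly as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))   # fresh random key, used once
ciphertext = vernam(plaintext, key)

assert vernam(ciphertext, key) == plaintext  # XOR is its own inverse
print(ciphertext.hex())
```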
epfl-collab
Which of the following problems has not been shown equivalent to the others?
['The RSA Order Problem.', 'The RSA Factorization Problem.', 'The RSA Key Recovery Problem.', 'The RSA Decryption Problem.']
D
null
Document 1::: Nonelementary problem In computational complexity theory, a nonelementary problem is a problem that is not a member of the class ELEMENTARY. As a class it is sometimes denoted as NONELEMENTARY. Examples of nonelementary problems that are nevertheless decidable include: the problem of regular expression equivalence with complementation the decision problem for monadic second-order logic over trees (see S2S) the decision problem for term algebras satisfiability of W. V. O. Quine's fluted fragment of first-order logic deciding β-convertibility of two closed terms in typed lambda calculus reachability in vector addition systems; it is Ackermann-complete. == References == Document 2::: Open problem in mathematics Many mathematical problems have been stated but not yet solved. These problems come from many areas of mathematics, such as theoretical physics, computer science, algebra, analysis, combinatorics, algebraic, differential, discrete and Euclidean geometries, graph theory, group theory, model theory, number theory, set theory, Ramsey theory, dynamical systems, and partial differential equations. Some problems belong to more than one discipline and are studied using techniques from different areas. Prizes are often awarded for the solution to a long-standing problem, and some lists of unsolved problems, such as the Millennium Prize Problems, receive considerable attention. This list is a composite of notable unsolved problems mentioned in previously published lists, including but not limited to lists considered authoritative. Although this list may never be comprehensive, the problems listed here vary widely in both difficulty and importance. Document 3::: Hilbert problems Of the cleanly formulated Hilbert problems, problems 3, 7, 10, 14, 17, 18, 19, and 20 have resolutions that are accepted by consensus of the mathematical community. On the other hand, problems 1, 2, 5, 6, 9, 11, 12, 15, 21, and 22 have solutions that have partial acceptance, but there exists some controversy as to whether they resolve the problems. That leaves 8 (the Riemann hypothesis), 13 and 16 unresolved, and 4 and 23 as too vague to ever be described as solved. The withdrawn 24 would also be in this class. Number 6 is considered a problem in physics rather than in mathematics. Document 4::: Kissing number problem Proving a solution to the three-dimensional case, despite being easy to conceptualise and model in the physical world, eluded mathematicians until the mid-20th century. Solutions in higher dimensions are considerably more challenging, and only a handful of cases have been solved exactly. For others investigations have determined upper and lower bounds, but not exact solutions. Document 5::: List of PPAD-complete problems This is a list of PPAD-complete problems.
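For reference (not stated in the documents above), the reductions usually cited for this question can be summarized as

\[
\textsf{RSA Factorization} \;\Longleftrightarrow\; \textsf{RSA Order} \;\Longleftrightarrow\; \textsf{RSA Key Recovery},
\]

i.e. knowing the factorization of $n$, the order of $\mathbb{Z}_n^*$, or the secret exponent $d$ are mutually equivalent in polynomial time, whereas the RSA Decryption Problem (computing $e$-th roots modulo $n$) reduces to factoring but is not known to be equivalent to it, which is why option D stands apart.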
epfl-collab
A proof system is perfect-black-box zero-knowledge if \dots
['for any PPT verifier $V$, there exists a PPT simulator $S$, such that $S$ produces an output which is hard to distinguish from the view of the verifier.', 'for any PPT simulator $S$ and for any PPT verifier $V$, $S^{V}$ produces an output which has the same distribution as the view of the verifier.', 'there exists a PPT verifier $V$ such that for any PPT simulator $S$, $S$ produces an output which has the same distribution as the view of the verifier.', 'there exists a PPT simulator $S$ such that for any PPT verifier $V$, $S^{V}$ produces an output which has the same distribution as the view of the verifier.']
D
null
Document 1::: Zero-knowledge proofs A zero-knowledge proof of some statement must satisfy three properties: Completeness: if the statement is true, an honest verifier (that is, one following the protocol properly) will be convinced of this fact by an honest prover. Soundness: if the statement is false, no cheating prover can convince an honest verifier that it is true, except with some small probability. Zero-knowledge: if the statement is true, no verifier learns anything other than the fact that the statement is true. In other words, just knowing the statement (not the secret) is sufficient to imagine a scenario showing that the prover knows the secret. Document 2::: Zero-knowledge proofs A formal definition of zero-knowledge has to use some computational model, the most common one being that of a Turing machine. Let P {\displaystyle P} , V {\displaystyle V} , and S {\displaystyle S} be Turing machines. An interactive proof system with ( P , V ) {\displaystyle (P,V)} for a language L {\displaystyle L} is zero-knowledge if for any probabilistic polynomial time (PPT) verifier V ^ {\displaystyle {\hat {V}}} there exists a PPT simulator S {\displaystyle S} such that ∀ x ∈ L , z ∈ { 0 , 1 } ∗ , View V ^ ⁡ = S ( x , z ) {\displaystyle \forall x\in L,z\in \{0,1\}^{*},\operatorname {View} _{\hat {V}}\left=S(x,z)} where View V ^ ⁡ {\displaystyle \operatorname {View} _{\hat {V}}\left} is a record of the interactions between P ( x ) {\displaystyle P(x)} and V ^ ( x , z ) {\displaystyle {\hat {V}}(x,z)} . Document 3::: Zero-knowledge proof In cryptography, a zero-knowledge proof or zero-knowledge protocol is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is true, while avoiding conveying to the verifier any information beyond the mere fact of the statement's truth. The intuition underlying zero-knowledge proofs is that it is trivial to prove the possession of certain information by simply revealing it; the challenge is to prove this possession without revealing the information, or any aspect of it whatsoever.In light of the fact that one should be able to generate a proof of some statement only when in possession of certain secret information connected to the statement, the verifier, even after having become convinced of the statement's truth, should nonetheless remain unable to prove the statement to third parties. In the plain model, nontrivial zero-knowledge proofs (i.e., those for languages outside of BPP) demand interaction between the prover and the verifier. This interaction usually entails the selection of one or more random challenges by the verifier; the random origin of these challenges, together with the prover's successful responses to them notwithstanding, jointly convince the verifier that the prover does possess the claimed knowledge. Document 4::: Zero-knowledge proofs This is formalized by showing that every verifier has some simulator that, given only the statement to be proved (and no access to the prover), can produce a transcript that "looks like" an interaction between an honest prover and the verifier in question.The first two of these are properties of more general interactive proof systems. The third is what makes the proof zero-knowledge.Zero-knowledge proofs are not proofs in the mathematical sense of the term because there is some small probability, the soundness error, that a cheating prover will be able to convince the verifier of a false statement. 
In other words, zero-knowledge proofs are probabilistic "proofs" rather than deterministic proofs. Document 5::: Zero-knowledge proofs The prover P {\displaystyle P} is modeled as having unlimited computation power (in practice, P {\displaystyle P} usually is a probabilistic Turing machine). Intuitively, the definition states that an interactive proof system ( P , V ) {\displaystyle (P,V)} is zero-knowledge if for any verifier V ^ {\displaystyle {\hat {V}}} there exists an efficient simulator S {\displaystyle S} (depending on V ^ {\displaystyle {\hat {V}}} ) that can reproduce the conversation between P {\displaystyle P} and V ^ {\displaystyle {\hat {V}}} on any given input. The auxiliary string z {\displaystyle z} in the definition plays the role of "prior knowledge" (including the random coins of V ^ {\displaystyle {\hat {V}}} ). The definition implies that V ^ {\displaystyle {\hat {V}}} cannot use any prior knowledge string z {\displaystyle z} to mine information out of its conversation with P {\displaystyle P} , because if S {\displaystyle S} is also given this prior knowledge then it can reproduce the conversation between V ^ {\displaystyle {\hat {V}}} and P {\displaystyle P} just as before.The definition given is that of perfect zero-knowledge. Computational zero-knowledge is obtained by requiring that the views of the verifier V ^ {\displaystyle {\hat {V}}} and the simulator are only computationally indistinguishable, given the auxiliary string.
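A cleaned-up LaTeX reading of the zero-knowledge definition quoted in Document 2 above (restated here for clarity, in its perfect variant): an interactive proof system $(P,V)$ for a language $L$ is zero-knowledge if for every PPT verifier $\hat V$ there exists a PPT simulator $S$ such that

\[
\forall x \in L,\; z \in \{0,1\}^*:\qquad
\operatorname{View}_{\hat V}\!\bigl\langle P(x), \hat V(x,z) \bigr\rangle \;=\; S(x,z),
\]

where equality means equality of probability distributions. In the black-box flavour asked about in the question, the quantifiers are reversed: a single simulator $S$, given oracle access to the verifier (written $S^{V}$), must work for every PPT verifier, which is what option D states.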
epfl-collab
Suppose that you can prove the security of your symmetric encryption scheme against the following attacks. In which case is your scheme going to be the \textbf{most} secure?
['Key recovery under known plaintext attack.', 'Decryption under known plaintext attack.', 'Key recovery under chosen ciphertext attack.', 'Decryption under chosen ciphertext attack.']
D
null
Document 1::: Known-key distinguishing attack In cryptography, a known-key distinguishing attack is an attack model against symmetric ciphers, whereby an attacker who knows the key can find a structural property in cipher, where the transformation from plaintext to ciphertext is not random. There is no common formal definition for what such a transformation may be. The chosen-key distinguishing attack is strongly related, where the attacker can choose a key to introduce such transformations.These attacks do not directly compromise the confidentiality of ciphers, because in a classical scenario, the key is unknown to the attacker. Document 2::: Indistinguishability (cryptography) A cryptosystem is considered secure in terms of indistinguishability if no adversary, given an encryption of a message randomly chosen from a two-element message space determined by the adversary, can identify the message choice with probability significantly better than that of random guessing (1⁄2). If any adversary can succeed in distinguishing the chosen ciphertext with a probability significantly greater than 1⁄2, then this adversary is considered to have an "advantage" in distinguishing the ciphertext, and the scheme is not considered secure in terms of indistinguishability. This definition encompasses the notion that in a secure scheme, the adversary should learn no information from seeing a ciphertext. Therefore, the adversary should be able to do no better than if it guessed randomly. Document 3::: Distinguishing attack In cryptography, a distinguishing attack is any form of cryptanalysis on data encrypted by a cipher that allows an attacker to distinguish the encrypted data from random data. Modern symmetric-key ciphers are specifically designed to be immune to such an attack. In other words, modern encryption schemes are pseudorandom permutations and are designed to have ciphertext indistinguishability. If an algorithm is found that can distinguish the output from random faster than a brute force search, then that is considered a break of the cipher. A similar concept is the known-key distinguishing attack, whereby an attacker knows the key and can find a structural property in the cipher, where the transformation from plaintext to ciphertext is not random. Document 4::: Asymptotic security In cryptography, concrete security or exact security is a practice-oriented approach that aims to give more precise estimates of the computational complexities of adversarial tasks than polynomial equivalence would allow. It quantifies the security of a cryptosystem by bounding the probability of success for an adversary running for a fixed amount of time. Security proofs with precise analyses are referred to as concrete.Traditionally, provable security is asymptotic: it classifies the hardness of computational problems using polynomial-time reducibility. Secure schemes are defined to be those in which the advantage of any computationally bounded adversary is negligible. Document 5::: Block cipher Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators.
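A short way to see why option D is the strongest of the four guarantees (this reasoning is not in the documents above): a chosen-ciphertext adversary can do everything a known-plaintext adversary can, and an adversary who recovers the key can in particular decrypt. Hence

\[
\text{secure against decryption under CCA}
\;\Longrightarrow\;
\text{secure against decryption under KPA}
\;\Longrightarrow\;
\text{secure against key recovery under KPA},
\]

and likewise security against decryption under CCA implies security against key recovery under CCA.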
epfl-collab
For a $n$-bit block cipher with $k$-bit key, given a plaintext-ciphertext pair, a key exhaustive search has an average number of trials of \dots
['$2^n$', '$2^k$', '$\\frac{2^k+1}{2}$', '$\\frac{2^n+1}{2}$']
C
null
Document 1::: Completeness (cryptography) In cryptography, a boolean function is said to be complete if the value of each output bit depends on all input bits. This is a desirable property to have in an encryption cipher, so that if one bit of the input (plaintext) is changed, every bit of the output (ciphertext) has an average of 50% probability of changing. The easiest way to show why this is good is the following: consider that if we changed our 8-byte plaintext's last byte, it would only have any effect on the 8th byte of the ciphertext. This would mean that if the attacker guessed 256 different plaintext-ciphertext pairs, he would always know the last byte of every 8byte sequence we send (effectively 12.5% of all our data). Finding out 256 plaintext-ciphertext pairs is not hard at all in the internet world, given that standard protocols are used, and standard protocols have standard headers and commands (e.g. "get", "put", "mail from:", etc.) which the attacker can safely guess. On the other hand, if our cipher has this property (and is generally secure in other ways, too), the attacker would need to collect 264 (~1020) plaintext-ciphertext pairs to crack the cipher in this way. Document 2::: KCipher-2 KCipher-2 is a stream cipher jointly developed by Kyushu University and Japanese telecommunications company KDDI. It is standardized as ISO/IEC 18033–4, and is on the list of recommended ciphers published by the Japanese Cryptography Research and Evaluation Committees (CRYPTREC). It has a key length of 128 bits, and can encrypt and decrypt around seven to ten times faster than the Advanced Encryption Standard (AES) algorithm. Document 3::: M6 (cipher) Mod 257, information about the secret key itself is revealed. One known plaintext reduces the complexity of a brute force attack to about 235 trial encryptions; "a few dozen" known plaintexts lowers this number to about 231. Due to its simple key schedule, M6 is also vulnerable to a slide attack, which requires more known plaintext but less computation. == References == Document 4::: ICE (cipher) They described an attack on Thin-ICE which recovers the secret key using 223 chosen plaintexts with a 25% success probability. If 227 chosen plaintexts are used, the probability can be improved to 95%. For the standard version of ICE, an attack on 15 out of 16 rounds was found, requiring 256 work and at most 256 chosen plaintexts. Document 5::: 40-bit encryption 40-bit encryption refers to a (now broken) key size of forty bits, or five bytes, for symmetric encryption; this represents a relatively low level of security. A forty bit length corresponds to a total of 240 possible keys. Although this is a large number in human terms (about a trillion), it is possible to break this degree of encryption using a moderate amount of computing power in a brute-force attack, i.e., trying out each possible key in turn.
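For reference, the derivation behind option C (not spelled out in the documents above): if the $k$-bit key is uniformly distributed and the attacker tries candidate keys in some fixed order, the correct key is found at trial $i$ with probability $2^{-k}$ for every $i \in \{1,\dots,2^k\}$, so the expected number of trials is

\[
\sum_{i=1}^{2^k} i \cdot 2^{-k}
\;=\; \frac{1}{2^k}\cdot\frac{2^k\,(2^k+1)}{2}
\;=\; \frac{2^k+1}{2},
\]

which depends on the key length $k$, not on the block length $n$.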
epfl-collab
Tick the \textbf{false} assertion. For a Vernam cipher...
['CRYPTO can be used as a key to encrypt the plaintext PLAIN', 'SERGE can be the ciphertext corresponding to the plaintext VAUDENAY', 'SUPERMAN can be the result of the encryption of the plaintext ENCRYPT', 'The key IAMAKEY can be used to encrypt any message of size up to 7 characters']
B
null
Document 1::: Padding oracle attack In cryptography, a padding oracle attack is an attack which uses the padding validation of a cryptographic message to decrypt the ciphertext. In cryptography, variable-length plaintext messages often have to be padded (expanded) to be compatible with the underlying cryptographic primitive. The attack relies on having a "padding oracle" who freely responds to queries about whether a message is correctly padded or not. Padding oracle attacks are mostly associated with CBC mode decryption used within block ciphers. Padding modes for asymmetric algorithms such as OAEP may also be vulnerable to padding oracle attacks. Document 2::: Square (cipher) In cryptography, Square (sometimes written SQUARE) is a block cipher invented by Joan Daemen and Vincent Rijmen. The design, published in 1997, is a forerunner to Rijndael, which has been adopted as the Advanced Encryption Standard. Square was introduced together with a new form of cryptanalysis discovered by Lars Knudsen, called the "Square attack". The structure of Square is a substitution–permutation network with eight rounds, operating on 128-bit blocks and using a 128-bit key. Square is not patented. Document 3::: Block cipher In cryptography, a block cipher is a deterministic algorithm that operates on fixed-length groups of bits, called blocks. Block ciphers are the elementary building blocks of many cryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated via encryption. A block cipher uses blocks as an unvarying transformation. Document 4::: CryptMT In cryptography, CryptMT is a stream cipher algorithm which internally uses the Mersenne twister. It was developed by Makoto Matsumoto, Mariko Hagita, Takuji Nishimura and Mutsuo Saito and is patented. It has been submitted to the eSTREAM project of the eCRYPT network. In that submission to eSTREAM, the authors also included another cipher named Fubuki, which also uses the Mersenne twister. Document 5::: Polybius cipher The Polybius square, also known as the Polybius checkerboard, is a device invented by the ancient Greeks Cleoxenus and Democleitus, and made famous by the historian and scholar Polybius. The device is used for fractionating plaintext characters so that they can be represented by a smaller set of symbols, which is useful for telegraphy, steganography, and cryptography. The device was originally used for fire signalling, allowing for the coded transmission of any message, not just a finite amount of predetermined options as was the convention before.
epfl-collab
Assume we are in a group $G$ of order $n = p_1^{\alpha_1} p_2^{\alpha_2}$, where $p_1$ and $p_2$ are two distinct primes and $\alpha_1, \alpha_2 \in \mathbb{N}$. The complexity of applying the Pohlig-Hellman algorithm for computing the discrete logarithm in $G$ is \ldots (\emph{choose the most accurate answer}):
['$\\mathcal{O}( \\alpha_1 \\sqrt{p_1} + \\alpha_2 \\sqrt{p_2})$.', '$\\mathcal{O}(\\alpha_1 p_1^{\\alpha_1 -1} + \\alpha_2 p_2^{\\alpha_2 -1})$.', '$\\mathcal{O}( \\alpha_1 \\log{p_1} + \\alpha_2 \\log{p_2})$.', '$\\mathcal{O}(\\sqrt{p_1}^{\\alpha_1} + \\sqrt{p_2}^{\\alpha_2})$.']
A
null
Document 1::: Pohlig–Hellman algorithm In group theory, the Pohlig–Hellman algorithm, sometimes credited as the Silver–Pohlig–Hellman algorithm, is a special-purpose algorithm for computing discrete logarithms in a finite abelian group whose order is a smooth integer. The algorithm was introduced by Roland Silver, but first published by Stephen Pohlig and Martin Hellman (independent of Silver). Document 2::: Discrete Logarithm In mathematics, for given real numbers a and b, the logarithm logb a is a number x such that bx = a. Analogously, in any group G, powers bk can be defined for all integers k, and the discrete logarithm logb a is an integer k such that bk = a. In number theory, the more commonly used term is index: we can write x = indr a (mod m) (read "the index of a to the base r modulo m") for rx ≡ a (mod m) if r is a primitive root of m and gcd(a,m) = 1. Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. Several important algorithms in public-key cryptography, such as ElGamal, base their security on the assumption that the discrete logarithm problem (DLP) over carefully chosen groups has no efficient solution. Document 3::: Computational Diffie–Hellman assumption Consider a cyclic group G of order q. The CDH assumption states that, given ( g , g a , g b ) {\displaystyle (g,g^{a},g^{b})\,} for a randomly chosen generator g and random a , b ∈ { 0 , … , q − 1 } , {\displaystyle a,b\in \{0,\ldots ,q-1\},\,} it is computationally intractable to compute the value g a b . {\displaystyle g^{ab}.\,} Document 4::: Discrete log problem Let G be any group. Denote its group operation by multiplication and its identity element by 1. Let b be any element of G. For any positive integer k, the expression bk denotes the product of b with itself k times: b k = b ⋅ b ⋯ b ⏟ k factors . Document 5::: Blum–Micali algorithm The Blum–Micali algorithm is a cryptographically secure pseudorandom number generator. The algorithm gets its security from the difficulty of computing discrete logarithms.Let p {\displaystyle p} be an odd prime, and let g {\displaystyle g} be a primitive root modulo p {\displaystyle p} . Let x 0 {\displaystyle x_{0}} be a seed, and let x i + 1 = g x i mod p {\displaystyle x_{i+1}=g^{x_{i}}\ {\bmod {\ p}}} . The i {\displaystyle i} th output of the algorithm is 1 if x i ≤ p − 1 2 {\displaystyle x_{i}\leq {\frac {p-1}{2}}} .
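For reference (the formula is standard but not quoted in Document 1 above): on a group of order $n=\prod_i p_i^{\alpha_i}$, Pohlig-Hellman combined with a generic square-root method such as baby-step giant-step on each prime-order subproblem costs about

\[
\mathcal{O}\!\Bigl(\sum_i \alpha_i\bigl(\log n + \sqrt{p_i}\,\bigr)\Bigr)
\]

group operations. For $n = p_1^{\alpha_1} p_2^{\alpha_2}$, keeping only the dominant square-root terms gives $\mathcal{O}(\alpha_1\sqrt{p_1} + \alpha_2\sqrt{p_2})$, which is option A.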
epfl-collab
Tick the \textbf{\emph{incorrect}} assertion.
['$PSPACE\\subseteq IP$.', '$NP\\mbox{-hard} \\subset P$.', '$P\\subseteq NP$.', '$NP\\subseteq IP$.']
B
null
Document 1::: Talk:Fibonacci sequence WP:CITEVAR is very clear that you should not be changing citation styles in this way without consensus. For those of us who use User:BrandonXLF/CitationStyleMarker.js to find inconsistent citation styles, your change is very annoying because it causes all of the citations to be flagged as inconsistent. Also your claim that this is helpful for bots and error checking seems dubious to me. Document 2::: Kinetic proofreading Kinetic proofreading (or kinetic amplification) is a mechanism for error correction in biochemical reactions, proposed independently by John Hopfield (1974) and Jacques Ninio (1975). Kinetic proofreading allows enzymes to discriminate between two possible reaction pathways leading to correct or incorrect products with an accuracy higher than what one would predict based on the difference in the activation energy between these two pathways.Increased specificity is obtained by introducing an irreversible step exiting the pathway, with reaction intermediates leading to incorrect products more likely to prematurely exit the pathway than reaction intermediates leading to the correct product. If the exit step is fast relative to the next step in the pathway, the specificity can be increased by a factor of up to the ratio between the two exit rate constants. (If the next step is fast relative to the exit step, specificity will not be increased because there will not be enough time for exit to occur.) This can be repeated more than once to increase specificity further. Document 3::: Attacking Faulty Reasoning Attacking Faulty Reasoning is a textbook on logical fallacies by T. Edward Damer that has been used for many years in a number of college courses on logic, critical thinking, argumentation, and philosophy. It explains 60 of the most commonly committed fallacies. Each of the fallacies is concisely defined and illustrated with several relevant examples. For each fallacy, the text gives suggestions about how to address or to "attack" the fallacy when it is encountered. The organization of the fallacies comes from the author’s own fallacy theory, which defines a fallacy as a violation of one of the five criteria of a good argument: the argument must be structurally well-formed; the premises must be relevant; the premises must be acceptable; the premises must be sufficient in number, weight, and kind; there must be an effective rebuttal of challenges to the argument.Each fallacy falls into at least one of Damer's five fallacy categories, which derive from the above criteria. Document 4::: Overhead bar Marking one or more words with a continuous line above the characters is sometimes called overstriking, though overstriking generally refers to printing one character on top of an already-printed character. An overline, that is, a single line above a chunk of text, should not be confused with the macron, a diacritical mark placed above (or sometimes below) individual letters. The macron is narrower than the character box. Document 5::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. 
Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol ⊥ {\displaystyle \bot } .Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), ⊥ {\displaystyle \bot } , is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity.
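For reference (not stated in the documents above), the standard inclusions that make options A, C and D true assertions are

\[
P \;\subseteq\; NP \;\subseteq\; PSPACE \;=\; IP,
\]

where the last equality is Shamir's theorem. By contrast, $NP\text{-hard} \subset P$ is not a known result (it would imply $P = NP$), which is why option B is the incorrect assertion.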
epfl-collab
Tick the \emph{correct} statement. $\Sigma$-protocols \ldots
['respect the property of zero-knowledge for any verifier.', 'consist of protocols between a prover and a verifier, where the verifier is polynomially bounded.', 'are defined for any language in \\textrm{PSPACE}.', 'have a polynomially unbounded extractor that can yield a witness.']
B
null
Document 1::: Security protocol notation In cryptography, security (engineering) protocol notation, also known as protocol narrations and Alice & Bob notation, is a way of expressing a protocol of correspondence between entities of a dynamic system, such as a computer network. In the context of a formal model, it allows reasoning about the properties of such a system. The standard notation consists of a set of principals (traditionally named Alice, Bob, Charlie, and so on) who wish to communicate. They may have access to a server S, shared keys K, timestamps T, and can generate nonces N for authentication purposes. Document 2::: Security protocol notation Some authors consider the notation used by Steiner, Neuman, & Schiller as a notable reference.Several models exist to reason about security protocols in this way, one of which is BAN logic. Security protocol notation inspired many of the programming languages used in choreographic programming. == References == Document 3::: 5 sigma In the case where X takes random values from a finite data set x1, x2, ..., xN, with each value having the same probability, the standard deviation is or, by using summation notation, If, instead of having equal probabilities, the values have different probabilities, let x1 have probability p1, x2 have probability p2, ..., xN have probability pN. In this case, the standard deviation will be Document 4::: Decision Linear assumption The Decision Linear (DLIN) assumption is a computational hardness assumption used in elliptic curve cryptography. In particular, the DLIN assumption is useful in settings where the decisional Diffie–Hellman assumption does not hold (as is often the case in pairing-based cryptography). The Decision Linear assumption was introduced by Boneh, Boyen, and Shacham.Informally the DLIN assumption states that given ( u , v , h , u x , v y ) {\displaystyle (u,\,v,\,h,\,u^{x},\,v^{y})} , with u , v , h {\displaystyle u,\,v,\,h} random group elements and x , y {\displaystyle x,\,y} random exponents, it is hard to distinguish h x + y {\displaystyle h^{x+y}} from an independent random group element η {\displaystyle \eta } . Document 5::: Vector clocks Each time a process experiences an internal event, it increments its own logical clock in the vector by one. For instance, upon an event at process i {\displaystyle i} , it updates V C i ← V C i + 1 {\displaystyle VC_{i}\leftarrow VC_{i}+1} . Each time a process sends a message, it increments its own logical clock in the vector by one (as in the bullet above, but not twice for the same event) then it pairs the message with a copy of its own vector and finally sends the pair. Each time a process receives a message-vector clock pair, it increments its own logical clock in the vector by one and updates each element in its vector by taking the maximum of the value in its own vector clock and the value in the vector in the received pair (for every element). For example, if process P i {\displaystyle P_{i}} receives a message ( m , V C j ) {\displaystyle (m,VC_{j})} from P j {\displaystyle P_{j}} , it first increments its own logical clock in the vector by one V C i ← V C i + 1 {\displaystyle VC_{i}\leftarrow VC_{i}+1} and then updates its entire vector by setting V C i ← max ( V C i , V C j ) , ∀ k {\displaystyle VC_{i}\leftarrow \max(VC_{i},VC_{j}),\forall k} .
epfl-collab
Which defense(s) highlight the principle of least privilege in software security?
['DEP bits by disallowing execution on certain memory pages because code is restricted to code pages.', 'CFI protection on the forward edge because the check limits reachable targets.', 'A stack canary because it will signal any stack-based attack.', 'Applying updates regularly because software updates always reduce privileges.']
A
null
Document 1::: Protected procedure In computer science, the concept of protected procedure, first introduced as protected service routine in 1965, is necessary when two computations A and B use the same routine S; a protected procedure is such if makes not possible for a malfunction of one of the two computation to cause incorrect execution to the other.One of the most important aspects of Dennis and Van Horn (hypothetical) system "supervisor" was the inclusion of a description of protected procedure.In a global environment system (where there's some shared variable), the protected procedure mechanism allows the enforcement of the principle of least privilege and the avoidance of side effects in resources management (see Denning principles). Document 2::: Logical security Logical security consists of software safeguards for an organization's systems, including user identification and password access, authenticating, access rights and authority levels. These measures are to ensure that only authorized users are able to perform actions or access information in a network or a workstation. It is a subset of computer security. Document 3::: Defense in depth (computing) Defense in depth is a concept used in information security in which multiple layers of security controls (defense) are placed throughout an information technology (IT) system. Its intent is to provide redundancy in the event a security control fails or a vulnerability is exploited that can cover aspects of personnel, procedural, technical and physical security for the duration of the system's life cycle. Document 4::: Separation of protection and security In computer sciences, the separation of protection and security is a design choice. Wulf et al. identified protection as a mechanism and security as a policy, therefore making the protection-security distinction a particular case of the separation of mechanism and policy principle. Many frameworks consider both as security controls of varying types. For example, protection mechanisms would be considered technical controls, while a policy would be considered an administrative control. Document 5::: Software forensics Software forensics is the science of analyzing software source code or binary code to determine whether intellectual property infringement or theft occurred. It is the centerpiece of lawsuits, trials, and settlements when companies are in dispute over issues involving software patents, copyrights, and trade secrets. Software forensics tools can compare code to determine correlation, a measure that can be used to guide a software forensics expert.
epfl-collab
For which kind of bugs does default LLVM provide sanitizers?
['Buffer overflows', 'Race conditions between threads', 'Logic bugs', 'Memory leaks']
D
null
Document 1::: Segmentation violation Processes can in some cases install a custom signal handler, allowing them to recover on their own, but otherwise the OS default signal handler is used, generally causing abnormal termination of the process (a program crash), and sometimes a core dump. Segmentation faults are a common class of error in programs written in languages like C that provide low-level memory access and few to no safety checks. They arise primarily due to errors in use of pointers for virtual memory addressing, particularly illegal access. Document 2::: Segmentation violation Another type of memory access error is a bus error, which also has various causes, but is today much rarer; these occur primarily due to incorrect physical memory addressing, or due to misaligned memory access – these are memory references that the hardware cannot address, rather than references that a process is not allowed to address. Many programming languages may employ mechanisms designed to avoid segmentation faults and improve memory safety. For example, Rust employs an ownership-based model to ensure memory safety. Other languages, such as Lisp and Java, employ garbage collection, which avoids certain classes of memory errors that could lead to segmentation faults. Document 3::: Stale pointer bug A stale pointer bug, otherwise known as an aliasing bug, is a class of subtle programming errors that can arise in code that does dynamic memory allocation, especially via the malloc function or equivalent. If several pointers address (are "aliases for") a given chunk of storage, it may happen that the storage is freed or reallocated (and thus moved) through one alias and then referenced through another, which may lead to subtle (and possibly intermittent) errors depending on the state and the allocation history of the malloc arena. This bug can be avoided by never creating aliases for allocated memory, by controlling the dynamic scope of references to the storage so that none can remain when it is freed, or by use of a garbage collector, in the form of an intelligent memory-allocation library or as provided by higher-level languages, such as Lisp. The term "aliasing bug" is nowadays associated with C programming, but it was already in use in a very similar sense in the ALGOL 60 and Fortran programming language communities in the 1960s. Document 4::: Microarchitectural Data Sampling The Microarchitectural Data Sampling (MDS) vulnerabilities are a set of weaknesses in Intel x86 microprocessors that use hyper-threading, and leak data across protection boundaries that are architecturally supposed to be secure. The attacks exploiting the vulnerabilities have been labeled Fallout, RIDL (Rogue In-Flight Data Load), ZombieLoad., and ZombieLoad 2. Document 5::: Defaults (software) defaults is a command line utility that manipulates plist files. Introduced in 1998 OPENSTEP, defaults is found in the system's descendants macOS and GNUstep.The name "defaults" derives from OpenStep's name for user preferences, Defaults, or NSUserDefaults in Foundation Kit. Each application had its own defaults plist ("domain"), under ~/Defaults for the user configuration and /Defaults for the system configuration. The lookup system also supports a NSGlobalDomain.plist, where defaults written there will be seen by all applications. In macOS, the Defaults part of the path is replaced by the more intuitive Library/Preferences. 
defaults accesses the plists based on the domain given. defaults is also able to read and write any plist specified with a path, although Apple plans to phase out this utility in a future version.
epfl-collab
Which of the following hold(s) true about update deployment in the secure development lifecycle?
['Updates may bring new code that may be buggy, so additional\n monitoring is required after deploying an update.', 'You should always deploy third party updates automatically\n and immediately in your project.', 'One motivation for automatic updates is for manufacturers to\n ensure that users have the latest code installed.', 'Not allowing rolling back to previous versions is necessary\n in the Secure Development Lifecycle.']
A
null
Document 1::: Slipstream (computing) Even when the source code is available, patching makes possible the installation of small changes to the object program without the need to recompile or reassemble. For minor changes to software, it is often easier and more economical to distribute patches to users rather than redistributing a newly recompiled or reassembled program. Although meant to fix problems, poorly designed patches can sometimes introduce new problems (see software regressions). In some special cases updates may knowingly break the functionality or disable a device, for instance, by removing components for which the update provider is no longer licensed. Patch management is a part of lifecycle management, and is the process of using a strategy and plan of what patches should be applied to which systems at a specified time. Document 2::: Continuous Delivery A straightforward and repeatable deployment process is important for continuous delivery. Continuous delivery contrasts with continuous deployment (also abbreviated CD), a similar approach in which software is also produced in short cycles but through automated deployments even to production rather than requiring a "click of a button" for that last step. : 52 As such, continuous deployment can be viewed as a more complete form of automation than continuous delivery. Document 3::: Software quality Software Assurance (SA) covers both the property and the process to achieve it: confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle and that the software functions in the intended manner The planned and systematic set of activities that ensure that software life cycle processes and products conform to requirements, standards, and procedures Document 4::: Continuous Delivery Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time and, following a pipeline through a "production-like environment", without doing so manually. It aims at building, testing, and releasing software with greater speed and frequency. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. Document 5::: Web operations After engineering had built a software product, and QA had verified it as correct, it would be handed to a support staff to operate the working software. Such a view assumed that software was mostly immutable in production and that usage would be mostly stable. Increasingly, "a web application involves many specialists, but it takes people in web ops to ensure that everything works together throughout an application's lifetime."
epfl-collab
Current software is complex and often relies on external dependencies. What are the security implications?
['During the requirement phase of the secure development\n lifecycle, a developer must list all the required dependencies.', 'Closed source code is more secure than open source code as it\n prohibits other people from finding security bugs.', 'As most third party software is open source, it is safe by\n default since many people reviewed it.', 'It is necessary to extensively security test every executable\n on a system before putting it in production.']
A
null
Document 1::: Software dependency In computer science, a library is a collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values or type specifications. In IBM's OS/360 and its successors they are referred to as partitioned data sets.A library is also a collection of implementations of behavior, written in terms of a language, that has a well-defined interface by which the behavior is invoked. Document 2::: Software dependency In that case, there may be internal libraries that are reused by independent sub-portions of the large program. The distinguishing feature is that a library is organized for the purposes of being reused by independent programs or sub-programs, and the user only needs to know the interface and not the internal details of the library. The value of a library lies in the reuse of standardized program elements. Document 3::: Legacy systems Legacy code may be written in programming languages, use frameworks and external libraries, or use architecture and patterns that are no longer considered modern, increasing the mental burden and ramp-up time for software engineers who work on the codebase. Legacy code may have zero or insufficient automated tests, making refactoring dangerous and likely to introduce bugs. Long-lived code is susceptible to software rot, where changes to the runtime environment, or surrounding software or hardware may require maintenance or emulation of some kind to keep working. Document 4::: Software dependency Most compiled languages have a standard library, although programmers can also create their own custom libraries. Most modern software systems provide libraries that implement the majority of the system services. Such libraries have organized the services which a modern application requires. As such, most code used by modern applications is provided in these system libraries. Document 5::: Check Point Integrity A number of destructive worms that followed, and the subsequent rise of spyware as a significant problem, continued to increase demand for endpoint security products. Data privacy and integrity regulations and required security audits mandated by governmental and professional authorities, along with infections and damage caused by guest PC access, have also prompted use of such security software. Competitors include Symantec/Sygate, Cisco Security Agent, McAfee Entercept, and even point products like Determina's Memory Firewall.
epfl-collab
Daemons are just long-running processes. When applying mitigations to these processes, several aspects change. Which ones?
['CFI becomes less effective as the concurrent clients cause\n more targets to be available.', 'Stack canaries become less effective as multiple requests are\n handled by the same thread.', 'DEP becomes less effective as compiler optimizations are\n turned on, allowing the attacker to inject new code.', 'ASLR becomes less effective as multiple requests across\n different users are handled in a single process.']
D
null
Document 1::: Resource leak In other cases resource leaks can be a major problem, causing resource starvation and severe system slowdown or instability, crashing the leaking process, other processes, or even the system. Resource leaks often go unnoticed under light load and short runtimes, and these problems only manifest themselves under heavy system load or systems that remain running for long periods of time.Resource leaks are particularly a problem for resources available in very low quantities. Leaking a unique resource, such as a lock, is particularly serious, as this causes immediate resource starvation (it prevents other processes from acquiring it) and causes deadlock. Intentionally leaking resources can be used in a denial-of-service attack, such as a fork bomb, and thus resource leaks present a security bug. Document 2::: Mitigation Mitigation is the reduction of something harmful or the reduction of its harmful effects. It may refer to measures taken to reduce the harmful effects of hazards that remain in potentia, or to manage harmful incidents that have already occurred. It is a stage or component of emergency management and of risk management. The theory of mitigation is a frequently used element in criminal law and is often used by a judge to try cases such as murder, where a perpetrator is subject to varying degrees of responsibility as a result of one's actions. Document 3::: Delayed allocation This has the effect of batching together allocations into larger runs. Such delayed processing reduces CPU usage, and tends to reduce disk fragmentation, especially for files which grow slowly. Document 4::: Observer effect (information technology) In information technology, the observer effect is the impact on the behaviour of a computer process caused by the act of observing the process while it is running. For example: if a process uses a log file to record its progress, the process could slow down. Furthermore, the act of viewing the file while the process is running could cause an I/O error in the process, which could, in turn, cause it to stop. Another example would be observing the performance of a CPU by running both the observed and observing programs on the same CPU, which will lead to inaccurate results because the observer program itself affects the CPU performance (modern, heavily cached and pipelined CPUs are particularly affected by this kind of observation). Document 5::: Process isolation Process isolation is a set of different hardware and software technologies designed to protect each process from other processes on the operating system. It does so by preventing process A from writing to process B. Process isolation can be implemented with virtual address space, where process A's address space is different from process B's address space – preventing A from writing onto B. Security is easier to enforce by disallowing inter-process memory access, in contrast with less secure architectures such as DOS in which any process can write to any memory in any other process.
epfl-collab
Which of the following apply to recent Android-based mobile systems but not to Linux-based desktop systems?
['By default, each app runs as its own user.', 'Arbitrary apps can exchange files through shared\n directories.', 'All apps run in a strict container with only limited system\n calls available.', 'Apps should use the binder interface to communicate with other\n apps.']
D
null
Document 1::: KDE Connect KDE Connect is a multi-platform application developed by KDE, which facilitates wireless communications and data transfer between devices over local networks. KDE Connect is available in the repositories of many Linux Distributions and F-Droid, Google Play Store for Android. Often, distributions bundle KDE Connect in their KDE Plasma desktop variant. KDE Connect has been reimplemented in the GNOME desktop environment as GSConnect, which can be obtained from Gnome Extension Store. Document 2::: Android rooting Rooting is the process by which users of Android devices can attain privileged control (known as root access) over various subsystems of the device, usually smartphones. Because Android is based on a modified version of the Linux kernel, rooting an Android device gives similar access to administrative (superuser) permissions as on Linux or any other Unix-like operating system such as FreeBSD or macOS. Rooting is often performed to overcome limitations that carriers and hardware manufacturers put on some devices. Thus, rooting gives the ability (or permission) to alter or replace system applications and settings, run specialized applications ("apps") that require administrator-level permissions, or perform other operations that are otherwise inaccessible to a normal Android user. Document 3::: Comparison of operating systems These tables provide a comparison of operating systems, of computer devices, as listing general and technical information for a number of widely used and currently available PC or handheld (including smartphone and tablet computer) operating systems. The article "Usage share of operating systems" provides a broader, and more general, comparison of operating systems that includes servers, mainframes and supercomputers. Because of the large number and variety of available Linux distributions, they are all grouped under a single entry; see comparison of Linux distributions for a detailed comparison. There is also a variety of BSD and DOS operating systems, covered in comparison of BSD operating systems and comparison of DOS operating systems. Document 4::: Android Runtime Android Runtime (ART) is an application runtime environment used by the Android operating system. Replacing Dalvik, the process virtual machine originally used by Android, ART performs the translation of the application's bytecode into native instructions that are later executed by the device's runtime environment. Document 5::: Android rooting On some devices, rooting can also facilitate the complete removal and replacement of the device's operating system, usually with a more recent release of its current operating system. Root access is sometimes compared to jailbreaking devices running the Apple iOS operating system. However, these are different concepts: Jailbreaking is the bypass of several types of Apple prohibitions for the end user, including modifying the operating system (enforced by a "locked bootloader"), installing non-officially approved (not available on the App Store) applications via sideloading, and granting the user elevated administration-level privileges (rooting).
epfl-collab
Which of the following attack vectors apply to mobile Android systems?
['Hardware vendors like \\$am\\$ung are primarily interested in making money and not in providing software updates, resulting in outdated software that is vulnerable to attacks.', 'Malicious apps can intercept network traffic of benign apps.', 'Apps may maliciously declare intent filters to receive intents from benign apps.', 'Overprivileged apps may be abused as a confused deputy, allowing malicious apps to steal access to their privileges.']
C
null
Document 1::: Attack vector In computer security, an attack vector is a specific path, method, or scenario that can be exploited to break into an IT system, thus compromising its security. The term was derived from the corresponding notion of vector in biology. An attack vector may be exploited manually, automatically, or through a combination of manual and automatic activity. Often, this is a multi-step process. Document 2::: Attack vector For instance, malicious code (code that the user did not consent to being run and that performs actions the user would not consent to) often operates by being added to a harmless seeming document made available to an end user. When the unsuspecting end user opens the document, the malicious code in question (known as the payload) is executed and performs the abusive tasks it was programmed to execute, which may include things such as spreading itself further, opening up unauthorized access to the IT system, stealing or encrypting the user's documents, etc. In order to limit the chance of discovery once installed, the code in question is often obfuscated by layers of seemingly harmless code.Some common attack vectors: exploiting buffer overflows; this is how the Blaster worm was able to propagate. exploiting webpages and email supporting the loading and subsequent execution of JavaScript or other types of scripts without properly limiting their powers. exploiting networking protocol flaws to perform unauthorized actions at the other end of a network connection. phishing: sending deceptive messages to end users to entice them to reveal confidential information, such as passwords. Document 3::: KRACK The weakness is exhibited in the Wi-Fi standard itself, and not due to errors in the implementation of a sound standard by individual products or implementations. Therefore, any correct implementation of WPA2 is likely to be vulnerable. The vulnerability affects all major software platforms, including Microsoft Windows, macOS, iOS, Android, Linux, OpenBSD and others.The widely used open-source implementation wpa_supplicant, utilized by Linux and Android, was especially susceptible as it can be manipulated to install an all-zeros encryption key, effectively nullifying WPA2 protection in a man-in-the-middle attack. Version 2.7 fixed this vulnerability.The security protocol protecting many Wi-Fi devices can essentially be bypassed, potentially allowing an attacker to intercept sent and received data. Document 4::: Android rooting Rooting is the process by which users of Android devices can attain privileged control (known as root access) over various subsystems of the device, usually smartphones. Because Android is based on a modified version of the Linux kernel, rooting an Android device gives similar access to administrative (superuser) permissions as on Linux or any other Unix-like operating system such as FreeBSD or macOS. Rooting is often performed to overcome limitations that carriers and hardware manufacturers put on some devices. Thus, rooting gives the ability (or permission) to alter or replace system applications and settings, run specialized applications ("apps") that require administrator-level permissions, or perform other operations that are otherwise inaccessible to a normal Android user. Document 5::: Kali NetHunter Kali NetHunter is a free and open-source mobile penetration testing platform for Android devices, based on Kali Linux. 
Kali NetHunter is available for un-rooted devices (NetHunter Rootless), for rooted devices that have a standard recovery (NetHunter Lite), and for rooted devices with custom recovery for which a NetHunter specific kernel is available (NetHunter). Official images are published by Offensive Security on their download page and are updated every quarter. NetHunter images with custom kernels are published for the most popular supported devices, such as Google Nexus, Samsung Galaxy and OnePlus. Many more models are supported, and images not published by Offensive Security can be generated using NetHunter build scripts. Kali NetHunter is maintained by a community of volunteers, and is funded by Offensive Security.
epfl-collab
Does AddressSanitizer prevent \textbf{all} use-after-free bugs?
['Yes, because free’d memory is unmapped and accesses therefore cause segmentation faults.', 'No, because quarantining free’d memory chunks forever prevents legit memory reuse and could potentially lead to out-of-memory situations.', 'Yes, because free’d memory chunks are poisoned.', "No, because UAF detection is not part of ASan's feature set."]
B
null
Document 1::: Heap corruption Using memory beyond the memory that was allocated (buffer overflow): If an array is used in a loop, with incorrect terminating condition, memory beyond the array bounds may be accidentally manipulated. Buffer overflow is one of the most common programming flaws exploited by computer viruses, causing serious computer security issues (e.g. return-to-libc attack, stack-smashing protection) in widely used programs. In some cases programs can also incorrectly access the memory before the start of a buffer. Faulty heap memory management: Memory leaks and freeing non-heap or un-allocated memory are the most frequent errors caused by faulty heap memory management.Many memory debuggers such as Purify, Valgrind, Insure++, Parasoft C/C++test, AddressSanitizer are available to detect memory corruption errors. Document 2::: Fragile binary interface problem The fragile binary interface problem or FBI is a shortcoming of certain object-oriented programming language compilers, in which internal changes to an underlying class library can cause descendant libraries or programs to cease working. It is an example of software brittleness. This problem is more often called the fragile base class problem or FBC; however, that term has a wider sense. Document 3::: Comparison of HTML parsers ** sanitize (generating standard-compatible web-page, reduce spam, etc.) and clean (strip out surplus presentational tags, remove XSS code, etc.) HTML code. *** Updates HTML4.X to XHTML or to HTML5, converting deprecated tags (ex. CENTER) to valid ones (ex. DIV with style="text-align:center;"). == References == Document 4::: Stale pointer bug A stale pointer bug, otherwise known as an aliasing bug, is a class of subtle programming errors that can arise in code that does dynamic memory allocation, especially via the malloc function or equivalent. If several pointers address (are "aliases for") a given chunk of storage, it may happen that the storage is freed or reallocated (and thus moved) through one alias and then referenced through another, which may lead to subtle (and possibly intermittent) errors depending on the state and the allocation history of the malloc arena. This bug can be avoided by never creating aliases for allocated memory, by controlling the dynamic scope of references to the storage so that none can remain when it is freed, or by use of a garbage collector, in the form of an intelligent memory-allocation library or as provided by higher-level languages, such as Lisp. The term "aliasing bug" is nowadays associated with C programming, but it was already in use in a very similar sense in the ALGOL 60 and Fortran programming language communities in the 1960s. Document 5::: Segmentation violation Another type of memory access error is a bus error, which also has various causes, but is today much rarer; these occur primarily due to incorrect physical memory addressing, or due to misaligned memory access – these are memory references that the hardware cannot address, rather than references that a process is not allowed to address. Many programming languages may employ mechanisms designed to avoid segmentation faults and improve memory safety. For example, Rust employs an ownership-based model to ensure memory safety. Other languages, such as Lisp and Java, employ garbage collection, which avoids certain classes of memory errors that could lead to segmentation faults.
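A minimal sketch of the use-after-free pattern this question is about, assuming an illustrative buffer size, string, and compiler invocation (none of which come from the question itself). AddressSanitizer poisons and quarantines the freed chunk, so the dangling read below is reported as heap-use-after-free; because the quarantine is finite, detection is probabilistic rather than absolute, which is what the trade-off in the choices refers to.
\begin{lstlisting}[language=C++,style=c]
// Hypothetical example: compile with
//   clang++ -fsanitize=address -g uaf.cc
#include <cstdio>
#include <cstring>

int main() {
    char *buf = new char[64];        // heap allocation
    std::strcpy(buf, "hello");
    delete[] buf;                    // chunk is poisoned and quarantined by ASan
    // Dangling read: reported as heap-use-after-free while the chunk is
    // still in ASan's quarantine; heavy allocation churn could in principle
    // recycle it first, which is why ASan cannot catch *all* UAFs.
    std::printf("%c\n", buf[0]);
    return 0;
}
\end{lstlisting}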
epfl-collab
For security reasons, you accept the performance and memory overhead introduced by common sanitizers and deploy them in your user-facing production server software. Assuming that all memory safety bugs in your software are detected by the sanitizers, which of the following properties do the sanitizers provide to your code?
['Accountability of accesses to the program', 'Availability of the program', 'Confidentiality of the program data', 'Integrity of the program data']
C
null
Document 1::: Memory protection Protection may encompass all accesses to a specified area of memory, write accesses, or attempts to execute the contents of the area. An attempt to access unauthorized memory results in a hardware fault, e.g., a segmentation fault, storage violation exception, generally causing abnormal termination of the offending process. Memory protection for computer security includes additional techniques such as address space layout randomization and executable space protection. Document 2::: Heap corruption Using memory beyond the memory that was allocated (buffer overflow): If an array is used in a loop, with incorrect terminating condition, memory beyond the array bounds may be accidentally manipulated. Buffer overflow is one of the most common programming flaws exploited by computer viruses, causing serious computer security issues (e.g. return-to-libc attack, stack-smashing protection) in widely used programs. In some cases programs can also incorrectly access the memory before the start of a buffer. Faulty heap memory management: Memory leaks and freeing non-heap or un-allocated memory are the most frequent errors caused by faulty heap memory management.Many memory debuggers such as Purify, Valgrind, Insure++, Parasoft C/C++test, AddressSanitizer are available to detect memory corruption errors. Document 3::: Memory protection Memory protection is a way to control memory access rights on a computer, and is a part of most modern instruction set architectures and operating systems. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug or malware within a process from affecting other processes, or the operating system itself. Document 4::: OS kernel In contrast, application programs such as browsers, word processors, or audio or video players use a separate area of memory, user space. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness, as well as preventing malfunctioning applications from affecting other applications or crashing the entire operating system. Even in systems where the kernel is included in application address spaces, memory protection is used to prevent unauthorized applications from modifying the kernel. Document 5::: Process isolation Process isolation is a set of different hardware and software technologies designed to protect each process from other processes on the operating system. It does so by preventing process A from writing to process B. Process isolation can be implemented with virtual address space, where process A's address space is different from process B's address space – preventing A from writing onto B. Security is easier to enforce by disallowing inter-process memory access, in contrast with less secure architectures such as DOS in which any process can write to any memory in any other process.
epfl-collab
What is/are the goal(s) of compartmentalization?
['Make faults more severe as the surrounding code is smaller.', 'Better performance (i.e., lower overhead) since a compartment can fail without affecting others.', 'Isolate faults to individual (ideally small) components.', 'Allow easier abstraction of functionalities across components.']
C
null
Document 1::: In vitro compartmentalization In vitro compartmentalization (IVC) is an emulsion-based technology that generates cell-like compartments in vitro. These compartments are designed such that each contains no more than one gene. When the gene is transcribed and/or translated, its products (RNAs and/or proteins) become 'trapped' with the encoding gene inside the compartment. By coupling the genotype (DNA) and phenotype (RNA, protein), compartmentalization allows the selection and evolution of phenotype. Document 2::: Compartment (chemistry) In chemistry, a compartment is a part of a protein that serves a specific function. They are essentially protein subunits with the added condition that a compartment has distinct functionality, rather than being just a structural component. There may be multiple compartments on one and the same protein. One example is the case of Pyruvate dehydrogenase complex. Document 3::: Compartment (pharmacokinetics) In pharmacokinetics, a compartment is a defined volume of body fluids, typically of the human body, but also those of other animals with multiple organ systems. The meaning in this area of study is different from the concept of anatomic compartments, which are bounded by fasciae, the sheath of fibrous tissue that enclose mammalian organs. Instead, the concept focuses on broad types of fluidic systems. This analysis is used in attempts to mathematically describe distribution of small molecules throughout organisms with multiple compartments. Document 4::: Compartment (development) Compartments can be simply defined as separate, different, adjacent cell populations, which upon juxtaposition, create a lineage boundary. This boundary prevents cell movement from cells from different lineages across this barrier, restricting them to their compartment. Subdivisions are established by morphogen gradients and maintained by local cell-cell interactions, providing functional units with domains of different regulatory genes, which give rise to distinct fates. Compartment boundaries are found across species. Document 5::: Levels of organization The basic principle behind the organisation is the concept of emergence—the properties and functions found at a hierarchical level are not present and irrelevant at the lower levels. The biological organisation of life is a fundamental premise for numerous areas of scientific research, particularly in the medical sciences. Without this necessary degree of organisation, it would be much more difficult—and likely impossible—to apply the study of the effects of various physical and chemical phenomena to diseases and physiology (body function).
epfl-collab
Which of the following statements about code instrumentation is/are correct?
['We should instrument basic blocks when collecting edge coverage.', 'We can only do binary rewriting on position-independent code (PIC).', 'The instrumentation code for coverage collection should not change the original functionality.', 'Binary rewriting-based coverage collection has lower runtime overheads than compiler-based instrumentation.']
A
null
Document 1::: Instrumentation (computer programming) In the context of computer programming, instrumentation refers to the measure of a product's performance, in order to diagnose errors and to write trace information. Instrumentation can be of two types: source instrumentation and binary instrumentation. Document 2::: Profiling (computer programming) In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods. Document 3::: Code coverage In software engineering, code coverage is a percentage measure of the degree to which the source code of a program is executed when a particular test suite is run. A program with high test coverage has more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage. Many different metrics can be used to calculate test coverage. Document 4::: Data Acquisition Analog-to-digital converters, to convert conditioned sensor signals to digital values.Data acquisition applications are usually controlled by software programs developed using various general purpose programming languages such as Assembly, BASIC, C, C++, C#, Fortran, Java, LabVIEW, Lisp, Pascal, etc. Stand-alone data acquisition systems are often called data loggers. There are also open-source software packages providing all the necessary tools to acquire data from different, typically specific, hardware equipment. These tools come from the scientific community where complex experiment requires fast, flexible, and adaptable software. Those packages are usually custom-fit but more general DAQ packages like the Maximum Integrated Data Acquisition System can be easily tailored and are used in several physics experiments. Document 5::: Software performance testing In software quality assurance, performance testing is in general a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing, a subset of performance engineering, is a computer science practice which strives to build performance standards into the implementation, design and architecture of a system.
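As a rough illustration of coverage instrumentation that leaves program behavior untouched, the following sketch hand-inserts per-block counters the way a compiler pass might; the COV macro, the block IDs, and the classify function are invented for this example and do not correspond to any specific tool.
\begin{lstlisting}[language=C++,style=c]
#include <cstdint>
#include <cstdio>

// Hypothetical coverage map: one counter per instrumented basic block.
static std::uint32_t cov_counters[3] = {0, 0, 0};
#define COV(id) (++cov_counters[id])   // observes execution, changes no program logic

int classify(int x) {
    COV(0);                 // entry block
    if (x < 0) {
        COV(1);             // "negative" block
        return -1;
    }
    COV(2);                 // "non-negative" block
    return 1;
}

int main() {
    classify(-5);
    classify(7);
    for (int i = 0; i < 3; ++i)
        std::printf("block %d executed %u times\n", i, cov_counters[i]);
    return 0;
}
\end{lstlisting}
The inserted counters only record which blocks ran; return values and control flow are identical to the uninstrumented program, which is exactly the property coverage instrumentation has to preserve.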
epfl-collab
Which of the following statements about libFuzzer is/are correct?
['libFuzzer can only test single-threaded targets.', 'It is better to put narrow targets into the fuzzing stubs, e.g., if a target can parse several data formats, split it into several targets, one per format.', 'Unit tests may serve as foundation to create libFuzzer fuzzing stubs.', 'In libFuzzer’s default mode (not fork-mode), the tested APIs must not contain \\texttt{exit()}.']
C
null
Document 1::: Lempel–Ziv–Oberhumer Lempel–Ziv–Oberhumer (LZO) is a lossless data compression algorithm that is focused on decompression speed. Document 2::: Lempel–Ziv–Welch Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is simple to implement and has the potential for very high throughput in hardware implementations. It is the algorithm of the Unix file compression utility compress and is used in the GIF image format. Document 3::: Hexafluorobenzene Hexafluorobenzene, HFB, C6F6, or perfluorobenzene is an organofluorine compound. In this derivative of benzene, all hydrogen atoms have been replaced by fluorine atoms. The technical uses of the compound are limited, although it has some specialized uses in the laboratory owing to distinctive spectroscopic properties. Document 4::: Liebig condenser The Liebig condenser (, LEE-big) or straight condenser is a piece of laboratory equipment, specifically a condenser consisting of a straight glass tube surrounded by a water jacket. In typical laboratory operation, such as distillation, the condenser is clamped to a retort stand in vertical or oblique orientation. The hot vapor of some liquid is introduced at the upper end of the inner tube, and condenses in contact with its colder walls. Document 5::: Leibniz Supercomputing Centre The Leibniz Supercomputing Centre (LRZ) (German: Leibniz-Rechenzentrum) is a supercomputing centre on the Campus Garching near Munich, operated by the Bavarian Academy of Sciences and Humanities. Among other IT services, it provides supercomputer resources for research and access to the Munich Scientific Network (MWN); it is connected to the Deutsches Forschungsnetz with a 24 Gbit/s link. The centre is named after Gottfried Wilhelm Leibniz. It was founded in 1962 by Hans Piloty and Robert Sauer as part of the Bavarian Academy of Sciences and Humanities and the host for several world leading supercomputers (HLRB, HLRB-II, SuperMUC).
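Since none of the retrieved documents above actually illustrates libFuzzer, here is a minimal fuzzing stub for reference. LLVMFuzzerTestOneInput is the entry point libFuzzer expects; parse_header is a hypothetical stand-in for the code under test, and the build flags in the comment are one common way to compile such a target.
\begin{lstlisting}[language=C++,style=c]
// Hypothetical target: build with
//   clang++ -g -fsanitize=fuzzer,address fuzz_target.cc
#include <stddef.h>
#include <stdint.h>

// Stand-in for the library function being fuzzed.
static int parse_header(const uint8_t *data, size_t size) {
    if (size >= 4 && data[0] == 'H' && data[1] == 'D' && data[2] == 'R')
        return data[3];      // pretend to parse something
    return -1;
}

// libFuzzer calls this repeatedly with mutated inputs. The stub should be
// deterministic, must not call exit(), and (in the default in-process
// mode) runs inside a single process.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_header(data, size);
    return 0;                // non-zero return values are reserved
}
\end{lstlisting}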
epfl-collab
Which of the following statements about symbolic execution is/are correct?
['State space explosion is a common challenge for symbolic execution.', 'Symbolic execution requires actually running the target program.', 'Symbolic execution can efficiently handle and solve constraints in programs with simple logics but large input space.', "Symbolic execution can always accurately model a system's environment (e.g., system calls, file I/O, and network I/O)."]
C
null
Document 1::: Symbolic execution In computer science, symbolic execution (also symbolic evaluation or symbex) is a means of analyzing a program to determine what inputs cause each part of a program to execute. An interpreter follows the program, assuming symbolic values for inputs rather than obtaining actual inputs as normal execution of the program would. It thus arrives at expressions in terms of those symbols for expressions and variables in the program, and constraints in terms of those symbols for the possible outcomes of each conditional branch. Finally, the possible inputs that trigger a branch can be determined by solving the constraints. The field of symbolic simulation applies the same concept to hardware. Symbolic computation applies the concept to the analysis of mathematical expressions. Document 2::: Symbolic language (programming) In computer science, a symbolic language is a language that uses characters or symbols to represent concepts, such as mathematical operations and the entities (or operands) on which these operations are performed.Modern programming languages use symbols to represent concepts and/or data and are therefore, examples of symbolic languages.Some programming languages (such as Lisp and Mathematica) make it easy to represent higher-level abstractions as expressions in the language, enabling symbolic programming., Document 3::: Symbolic programming In computer programming, symbolic programming is a programming paradigm in which the program can manipulate its own formulas and program components as if they were plain data.Through symbolic programming, complex processes can be developed that build other more intricate processes by combining smaller units of logic or functionality. Thus, such programs can effectively modify themselves and appear to "learn", which makes them better suited for applications such as artificial intelligence, expert systems, natural language processing, and computer games. Languages that support symbolic programming include homoiconic languages such as Wolfram Language, LISP and Prolog. Document 4::: Structured program theorem The structured program theorem, also called the Böhm–Jacopini theorem, is a result in programming language theory. It states that a class of control-flow graphs (historically called flowcharts in this context) can compute any computable function if it combines subprograms in only three specific ways (control structures). These are Executing one subprogram, and then another subprogram (sequence) Executing one of two subprograms according to the value of a boolean expression (selection) Repeatedly executing a subprogram as long as a boolean expression is true (iteration)The structured chart subject to these constraints, particularly the loop constraint implying a single exit (as described later in this article), may however use additional variables in the form of bits (stored in an extra integer variable in the original proof) in order to keep track of information that the original program represents by the program location. The construction was based on Böhm's programming language P′′. The theorem forms the basis of structured programming, a programming paradigm which eschews goto commands and exclusively uses subroutines, sequences, selection and iteration. Document 5::: Execution semantics ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet are often called programming languages. 
However, some authors restrict the term "programming language" to Turing complete languages.Another usage regards programming languages as theoretical constructs for programming abstract machines and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources. John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.
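A toy example of how symbolic execution turns branch conditions into path constraints; the function and the magic constant are invented purely for illustration.
\begin{lstlisting}[language=C++,style=c]
#include <cstdio>

// If the argument x is treated as a symbolic value X, a symbolic executor
// explores both branches and records one path constraint per path:
//   path 1:   X * 3 == 42   -> the solver yields X = 14 (reaches the bug)
//   path 2: !(X * 3 == 42)  -> any other X
int check(int x) {
    if (x * 3 == 42) {
        std::puts("bug reached");   // input derived by the constraint solver
        return 1;
    }
    return 0;
}

int main() {
    // A concrete run only ever follows one path per input; the symbolic
    // engine enumerates paths, which is why path (state space) explosion
    // becomes the bottleneck once branches and loops multiply.
    check(14);
    return 0;
}
\end{lstlisting}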
epfl-collab
Which of the following statements about coverage-guided fuzzing is/are correct?
['Redundant seeds in the corpus will reduce fuzzing efficiency.', 'Due to the coverage feedback, a small random perturbation of a seed can have a significant impact on further exploration.', 'Counting the number of times the covered code has been executed provides a more fine-grained view of program behavior than only "covered/not covered" binary code coverage.', 'Fuzzers that have higher code coverage always find more bugs.']
A
null
Document 1::: Fault injection Robustness testing (also known as syntax testing, fuzzing or fuzz testing) is a type of fault injection commonly used to test for vulnerabilities in communication interfaces such as protocols, command line parameters, or APIs. The propagation of a fault through to an observable failure follows a well-defined cycle. When executed, a fault may cause an error, which is an invalid state within a system boundary. Document 2::: Differential testing Differential testing, also known as differential fuzzing, is a popular software testing technique that attempts to detect bugs, by providing the same input to a series of similar applications (or to different implementations of the same application), and observing differences in their execution. Differential testing complements traditional software testing, because it is well-suited to find semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is sometimes called back-to-back testing. Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug. Document 3::: Intelligent verification Automatically tracking paths through design structure to coverage points, to create new tests. Ensuring that various aspects of the design are only verified once in the same test sets. Scaling the test automatically for different hardware and software configurations of a system. Support for different verification methodologies like constrained random, directed, graph-based, use-case based in the same tool. "Intelligent Verification" uses existing logic simulation testbenches, and automatically targets and maximizes the following types of design coverage: Code coverage Branch coverage Expression coverage Functional coverage Assertion coverage Document 4::: Intelligent verification With automated coverage feedback, the test description is automatically adjusted to target design functionality that has not been previously verified (or "covered") by other tests existing tests. A key property of automated coverage feedback is that, given the same test environment, the software will automatically change the tests to improve functional design coverage in response to changes in the design. Newer intelligent verification tools are able to derive the essential functions one would expect of a testbench (stimulus, coverage, and checking) from a single, compact, high-level model. Document 5::: Disk-covering method A disk-covering method is a divide-and-conquer meta-technique for large-scale phylogenetic analysis which has been shown to improve the performance of both heuristics for NP-hard optimization problems and polynomial-time distance-based methods. Disk-covering methods are a meta-technique in that they have flexibility in several areas, depending on the performance metrics that are being optimized for the base method. Such metrics can be efficiency, accuracy, or sequence length requirements for statistical performance. There have been several disk-covering methods developed, which have been applied to different "base methods".
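A sketch of why execution counts give finer feedback than binary coverage, loosely in the spirit of AFL-style 8-bit counters; the map size, edge IDs, and saturation policy are assumptions made for this example rather than a description of any particular fuzzer.
\begin{lstlisting}[language=C++,style=c]
#include <cstdint>
#include <cstdio>

constexpr int kMapSize = 8;                      // toy coverage map
static std::uint8_t hit_counts[kMapSize] = {0};  // per-edge execution counters

// Instrumentation hook: saturating 8-bit counter per edge.
static void edge_hit(int edge_id) {
    if (hit_counts[edge_id] != 0xFF)
        ++hit_counts[edge_id];
}

static void process(int n) {
    edge_hit(0);                                 // function-entry edge
    for (int i = 0; i < n; ++i)
        edge_hit(1);                             // loop-body edge: count reveals n
}

int main() {
    process(3);
    process(100);
    for (int i = 0; i < 2; ++i)
        std::printf("edge %d: hit %u times\n", i,
                    static_cast<unsigned>(hit_counts[i]));
    // With binary coverage both runs look identical ("edge 1 covered");
    // with counters, an input that iterates 100 times is distinguishable
    // from one that iterates 3 times, giving finer-grained feedback.
    return 0;
}
\end{lstlisting}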
epfl-collab
Which of the following statements about fuzzing is/are correct?
['Greybox fuzzing is always the better alternative to blackbox fuzzing.', 'Blackbox fuzzers can make use of initial seeds.', 'Greybox fuzzing keeps track of concrete program paths to abstract behavior.', 'Generational fuzzing requires more manual work (to specify the generator policies) than mutational fuzzing, but can generate high-quality seeds.']
D
null
Document 1::: Fault injection Robustness testing (also known as syntax testing, fuzzing or fuzz testing) is a type of fault injection commonly used to test for vulnerabilities in communication interfaces such as protocols, command line parameters, or APIs. The propagation of a fault through to an observable failure follows a well-defined cycle. When executed, a fault may cause an error, which is an invalid state within a system boundary. Document 2::: Differential testing Differential testing, also known as differential fuzzing, is a popular software testing technique that attempts to detect bugs, by providing the same input to a series of similar applications (or to different implementations of the same application), and observing differences in their execution. Differential testing complements traditional software testing, because it is well-suited to find semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is sometimes called back-to-back testing. Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug. Document 3::: FuzzyCLIPS The system uses two basic inexact concepts, fuzziness and uncertainty. It has provided a useful environment for developing fuzzy applications but it does require significant effort to update and maintain as new versions of CLIPS are released. Document 4::: Combs method The Combs method is a rule base reduction method of writing fuzzy logic rules described by William E. Combs in 1997. It is designed to prevent combinatorial explosion in fuzzy logic rules.The Combs method takes advantage of the logical equality ( ( p ∧ q ) ⇒ r ) ⟺ ( ( p ⇒ r ) ∨ ( q ⇒ r ) ) {\displaystyle ((p\land q)\Rightarrow r)\iff ((p\Rightarrow r)\lor (q\Rightarrow r))} . Document 5::: Fuzzy extractor So Fuzzy extractors output almost uniform random sequences of bits which are a prerequisite for using cryptographic applications (as secret keys). Since the output bits are slightly non-uniform, there's a risk of a decreased security; but the distance from a uniform distribution is no more than ϵ {\displaystyle \epsilon } . As long as this distance is sufficiently small, the security will remain adequate.
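To make the mutational-versus-generational distinction concrete, the following sketch contrasts a byte-flipping mutator with a tiny hand-written generator policy; the toy "KEY=VALUE" format, the RNG seed, and both helper functions are invented for illustration.
\begin{lstlisting}[language=C++,style=c]
#include <cstddef>
#include <cstdint>
#include <random>
#include <string>
#include <vector>

static std::mt19937 rng(1234);

// Mutational: almost free to write; just perturbs an existing seed.
std::vector<std::uint8_t> mutate(std::vector<std::uint8_t> seed) {
    if (!seed.empty()) {
        std::uniform_int_distribution<std::size_t> pos(0, seed.size() - 1);
        seed[pos(rng)] ^= 0xFF;   // single byte flip; result may be malformed
    }
    return seed;
}

// Generational: needs a hand-written policy, but every output already
// respects the (toy) key=value input format.
std::string generate() {
    static const char *keys[] = {"host", "port", "mode"};
    std::uniform_int_distribution<int> k(0, 2), v(0, 9999);
    return std::string(keys[k(rng)]) + "=" + std::to_string(v(rng)) + "\n";
}

int main() {
    std::vector<std::uint8_t> seed = {'h', 'o', 's', 't', '=', '1'};
    std::vector<std::uint8_t> mutated = mutate(seed);   // maybe invalid syntax
    std::string generated = generate();                 // valid by construction
    return (mutated.empty() || generated.empty()) ? 1 : 0;
}
\end{lstlisting}
The mutator costs nothing to set up but easily produces syntactically invalid inputs; the generator requires the format to be specified up front and in exchange yields well-formed, high-quality seeds.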
epfl-collab
Which of the following statements about mitigations are correct?
['Control-Flow Integrity can efficiently protect the forward edge but, when using target sets, is limited on the backward edge.', 'Code-Pointer Integrity (specifically the implementation described in the slides) uses a separate stack to protect code pointers.', 'Shadow stacks can be implemented in software with zero overhead.', 'Safe stacks protect against corruption of all data on the stack.']
A
null
Document 1::: Mitigation Mitigation is the reduction of something harmful or the reduction of its harmful effects. It may refer to measures taken to reduce the harmful effects of hazards that remain in potentia, or to manage harmful incidents that have already occurred. It is a stage or component of emergency management and of risk management. The theory of mitigation is a frequently used element in criminal law and is often used by a judge to try cases such as murder, where a perpetrator is subject to varying degrees of responsibility as a result of one's actions. Document 2::: Environmental mitigation Environmental mitigation, compensatory mitigation, or mitigation banking, are terms used primarily by the United States government and the related environmental industry to describe projects or programs intended to offset known impacts to an existing historic or natural resource such as a stream, wetland, endangered species, archeological site, paleontological site or historic structure. To "mitigate" means to make less harsh or hostile. Environmental mitigation is typically a part of an environmental crediting system established by governing bodies which involves allocating debits and credits. Document 3::: Risk mitigation Mitigation is the reduction of something harmful or the reduction of its harmful effects. It may refer to measures taken to reduce the harmful effects of hazards that remain in potentia, or to manage harmful incidents that have already occurred. It is a stage or component of emergency management and of risk management. The theory of mitigation is a frequently used element in criminal law and is often used by a judge to try cases such as murder, where a perpetrator is subject to varying degrees of responsibility as a result of one's actions. Document 4::: Flood mitigation Flood control (or flood mitigation or flood protection or flood alleviation) methods are used to reduce or prevent the detrimental effects of flood waters. Flood relief methods are used to reduce the effects of flood waters or high water levels. Flooding can be caused by a mix of both natural processes, such as extreme weather upstream, and human changes to waterbodies and runoff. A distinction is made between structural and non-structural flood control measures. Document 5::: Environmental mitigation Debits occur in situations where a natural resource has been destroyed or severely impaired and credits are given in situations where a natural resource has been deemed to be improved or preserved. Therefore, when an entity such as a business or individual has a "debit" they are required to purchase a "credit". In some cases credits are bought from "mitigation banks" which are large mitigation projects established to provide credit to multiple parties in advance of development when such compensation cannot be achieved at the development site or is not seen as beneficial to the environment. Crediting systems can allow credit to be generated in different ways. For example, in the United States, projects are valued based on what the intentions of the project are which may be to preserve, enhance, restore or create (PERC) a natural resource.
epfl-collab
Given this program snippet which is part of a large (> 10000 LoC) codebase, which of these statements are true, given that the contents of string "s" are attacker controlled, the attacker can run the function f only once, the attacker has access to the binary and the binary is compiled for x86\_64 on a modern Linux system? \begin{lstlisting}[language=C,style=c] #include <string.h> #include <stdio.h> void f(char* s) { char b[100] = {0}; memcpy(b, s, strlen(s)); printf("\%s", b); } \end{lstlisting}
['If this program is compiled with no mitigations, an attacker can gain remote code execution.', 'If this program is compiled with DEP (Data-Execution Prevention) and no other mitigation, an attacker can gain remote code execution.', 'If this program is compiled with stack canaries and no other mitigation, an attacker can reliably gain remote code execution.', 'If this program is compiled with stack canaries and no other mitigation, an attacker can leak the canary.']
A
null
Document 1::: C string The C programming language has a set of functions implementing operations on strings (character strings and byte strings) in its standard library. Various operations, such as copying, concatenation, tokenization and searching are supported. For character strings, the standard library uses the convention that strings are null-terminated: a string of n characters is represented as an array of n + 1 elements, the last of which is a "NUL character" with numeric value 0. The only support for strings in the programming language proper is that the compiler translates quoted string constants into null-terminated strings. Document 2::: Stack smashing If the affected program is running with special privileges, or accepts data from untrusted network hosts (e.g. a webserver) then the bug is a potential security vulnerability. If the stack buffer is filled with data supplied from an untrusted user then that user can corrupt the stack in such a way as to inject executable code into the running program and take control of the process. This is one of the oldest and more reliable methods for attackers to gain unauthorized access to a computer. Document 3::: Heap corruption Using memory beyond the memory that was allocated (buffer overflow): If an array is used in a loop, with incorrect terminating condition, memory beyond the array bounds may be accidentally manipulated. Buffer overflow is one of the most common programming flaws exploited by computer viruses, causing serious computer security issues (e.g. return-to-libc attack, stack-smashing protection) in widely used programs. In some cases programs can also incorrectly access the memory before the start of a buffer. Faulty heap memory management: Memory leaks and freeing non-heap or un-allocated memory are the most frequent errors caused by faulty heap memory management.Many memory debuggers such as Purify, Valgrind, Insure++, Parasoft C/C++test, AddressSanitizer are available to detect memory corruption errors. Document 4::: Heap overflow Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage (such as malloc metadata) and uses the resulting pointer exchange to overwrite a program function pointer. For example, on older versions of Linux, two buffers allocated next to each other on the heap could result in the first buffer overwriting the second buffer's metadata. Document 5::: Data pointer Because pointers allow both protected and unprotected access to memory addresses, there are risks associated with using them, particularly in the latter case. Primitive pointers are often stored in a format similar to an integer; however, attempting to dereference or "look up" such a pointer whose value is not a valid memory address could cause a program to crash (or contain invalid data). To alleviate this potential problem, as a matter of type safety, pointers are considered a separate type parameterized by the type of data they point to, even if the underlying representation is an integer. Other measures may also be taken (such as validation & bounds checking), to verify that the pointer variable contains a value that is both a valid memory address and within the numerical range that the processor is capable of addressing.
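For contrast with the vulnerable snippet in the question, here is a hedged sketch of a bounded variant of f; the truncation policy (silently cutting the input at 99 bytes) is an assumption made for illustration, not an official fix.
\begin{lstlisting}[language=C++,style=c]
#include <cstdio>

// The original f() copies strlen(s) bytes into a 100-byte stack buffer:
// more than 100 bytes overflow b, and exactly 100 bytes leave it without
// a NUL terminator, so the following printf can leak adjacent stack data.
// A bounded variant caps the copy at the destination size instead.
void f_safe(const char *s) {
    char b[100];
    // Copy at most sizeof(b)-1 bytes and always NUL-terminate.
    std::snprintf(b, sizeof(b), "%s", s);
    std::printf("%s", b);
}

int main(int argc, char **argv) {
    f_safe(argc > 1 ? argv[1] : "test");
    return 0;
}
\end{lstlisting}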
epfl-collab
In x86-64 Linux, the canary is \textbf{always} different for every \ldots
['Namespace', 'Thread', 'Function', 'Process']
B
null
Document 1::: Intel 64 x86-64 (also known as x64, x86_64, AMD64, and Intel 64) is a 64-bit version of the x86 instruction set, first released in 1999. It introduced two new modes of operation, 64-bit mode and compatibility mode, along with a new 4-level paging mode. With 64-bit mode and the new paging mode, it supports vastly larger amounts of virtual memory and physical memory than was possible on its 32-bit predecessors, allowing programs to store larger amounts of data in memory. Document 2::: Universal Binary The universal binary format is a format for executable files that run natively on either PowerPC or Intel-manufactured IA-32 or Intel 64 or ARM64-based Macintosh computers. The format originated on NeXTStep as "Multi-Architecture Binaries", and the concept is more generally known as a fat binary, as seen on Power Macintosh. With the release of Mac OS X Snow Leopard, and before that, since the move to 64-bit architectures in general, some software publishers such as Mozilla have used the term "universal" to refer to a fat binary that includes builds for both i386 (32-bit Intel) and x86_64 systems. The same mechanism that is used to select between the PowerPC or Intel builds of an application is also used to select between the 32-bit or 64-bit builds of either PowerPC or Intel architectures. Document 3::: Unicode Standard Unicode, formally The Unicode Standard, is a text encoding standard maintained by the Unicode Consortium designed to support the use of text written in all of the world's major writing systems. Version 15 of the standard defines 149186 characters and 161 scripts used in various ordinary, literary, academic, and technical contexts. Many common characters, including numerals, punctuation, and other symbols, are unified within the standard and are not treated as specific to any given writing system. Unicode encodes thousands of emoji, with the continued development thereof conducted by the Consortium as a part of the standard. Document 4::: C alternative tokens C alternative tokens refer to a set of alternative spellings of common operators in the C programming language. They are implemented as a group of macro constants in the C standard library in the iso646.h header. The tokens were created by Bjarne Stroustrup for the pre-standard C++ language and were added to the C standard in a 1995 amendment to the C90 standard via library to avoid the breakage of existing code. The alternative tokens allow programmers to use C language bitwise and logical operators which could otherwise be hard to type on some international and non-QWERTY keyboards. The name of the header file they are implemented in refers to the ISO/IEC 646 standard, a 7-bit character set with a number of regional variations, some of which have accented characters in place of the punctuation marks used by C operators. Document 5::: Universal Binary Universal binaries typically include both PowerPC and x86 versions of a compiled application. The operating system detects a universal binary by its header, and executes the appropriate section for the architecture in use. This allows the application to run natively on any supported architecture, with no negative performance impact beyond an increase in the storage space taken up by the larger binary.
epfl-collab
Which of the following in Linux x86-64 assembly snippets can be used as a gadget AND can be chained with more gadgets (e.g., in a ROP/JOP chain)?
['\\texttt{pop rbx; pop rax; jmp rax}', '\\texttt{pop rbx; pop rax; ret}', '\\texttt{xor rbx, rbx; xor rbx, -1; push rbx; ret}', '\\texttt{mov eax, -1; call rax}']
A
null
Document 1::: X86 assembly language Like all assembly languages, x86 assembly uses mnemonics to represent fundamental CPU instructions, or machine code. Assembly languages are most often used for detailed and time-critical applications such as small real-time embedded systems, operating-system kernels, and device drivers, but can also be used for other applications. A compiler will sometimes produce assembly code as an intermediate step when translating a high-level program into machine code. Document 2::: IO address An example of the latter is found in the Commodore 64, which uses a form of memory mapping to cause RAM or I/O hardware to appear in the 0xD000-0xDFFF range. Port-mapped I/O often uses a special class of CPU instructions designed specifically for performing I/O, such as the in and out instructions found on microprocessors based on the x86 architecture. Different forms of these two instructions can copy one, two or four bytes (outb, outw and outl, respectively) between the EAX register or one of that register's subdivisions on the CPU and a specified I/O port address which is assigned to an I/O device. I/O devices have a separate address space from general memory, either accomplished by an extra "I/O" pin on the CPU's physical interface, or an entire bus dedicated to I/O. Because the address space for I/O is isolated from that for main memory, this is sometimes referred to as isolated I/O. Document 3::: XOP instruction set The XOP (eXtended Operations) instruction set, announced by AMD on May 1, 2009, is an extension to the 128-bit SSE core instructions in the x86 and AMD64 instruction set for the Bulldozer processor core, which was released on October 12, 2011. However AMD removed support for XOP from Zen (microarchitecture) onward.The XOP instruction set contains several different types of vector instructions since it was originally intended as a major upgrade to SSE. Most of the instructions are integer instructions, but it also contains floating point permutation and floating point fraction extraction instructions. See the index for a list of instruction types. Document 4::: GNU Assembler The GNU Assembler, commonly known as gas or as, is the assembler developed by the GNU Project. It is the default back-end of GCC. It is used to assemble the GNU operating system and the Linux kernel, and various other software. It is a part of the GNU Binutils package. Document 5::: X86 assembly language x86 assembly language is the name for the family of assembly languages which provide some level of backward compatibility with CPUs back to the Intel 8008 microprocessor, which was launched in April 1972. It is used to produce object code for the x86 class of processors. Regarded as a programming language, assembly is machine-specific and low-level.
epfl-collab
What is the difference between C++'s \texttt{static\_cast} and \texttt{dynamic\_cast}?
['\\texttt{static\\_cast} is faster but less safe than \\texttt{dynamic\\_cast}.', '\\texttt{static\\_cast} does not work on already-casted objects, while \\texttt{dynamic\\_cast} works always.', '\\texttt{static\\_cast} can only be applied to static classes whereas \\texttt{dynamic\\_cast} works for any class.', '\\texttt{static\\_cast} does not perform any kind of runtime check, while \\texttt{dynamic\\_cast} performs runtime checks on the validity of the cast.']
D
null
Document 1::: Operator precedence in C C++ also contains the type conversion operators const_cast, static_cast, dynamic_cast, and reinterpret_cast. The formatting of these operators means that their precedence level is unimportant. Most of the operators available in C and C++ are also available in other C-family languages such as C#, D, Java, Perl, and PHP with the same precedence, associativity, and semantics. Document 2::: Virtual function table In computer programming, a virtual method table (VMT), virtual function table, virtual call table, dispatch table, vtable, or vftable is a mechanism used in a programming language to support dynamic dispatch (or run-time method binding). Whenever a class defines a virtual function (or method), most compilers add a hidden member variable to the class that points to an array of pointers to (virtual) functions called the virtual method table. These pointers are used at runtime to invoke the appropriate function implementations, because at compile time it may not yet be known if the base function is to be called or a derived one implemented by a class that inherits from the base class. There are many different ways to implement such dynamic dispatch, but use of virtual method tables is especially common among C++ and related languages (such as D and C#). Document 3::: Virtual function table When the program calls the speak function on a Cat reference (which can refer to an instance of Cat, or an instance of HouseCat or Lion), the code must be able to determine which implementation of the function the call should be dispatched to. This depends on the actual class of the object, not the class of the reference to it (Cat). The class cannot generally be determined statically (that is, at compile time), so neither can the compiler decide which function to call at that time. The call must be dispatched to the right function dynamically (that is, at run time) instead. Document 4::: Static variable In computer programming, a static variable is a variable that has been allocated "statically", meaning that its lifetime (or "extent") is the entire run of the program. This is in contrast to shorter-lived automatic variables, whose storage is stack allocated and deallocated on the call stack; and in contrast to objects, whose storage is dynamically allocated and deallocated in heap memory. Variable lifetime is contrasted with scope (where a variable can be used): "global" and "local" refer to scope, not lifetime, but scope often implies lifetime. In many languages, global variables are always static, but in some languages they are dynamic, while local variables are generally automatic, but may be static. In general, static memory allocation is the allocation of memory at compile time, before the associated program is executed, unlike dynamic memory allocation or automatic memory allocation where memory is allocated as required at run time. Document 5::: Run-time type information In computer programming, run-time type information or run-time type identification (RTTI) is a feature of some programming languages (such as C++, Object Pascal, and Ada) that exposes information about an object's data type at runtime. Run-time type information may be available for all types or only to types that explicitly have it (as is the case with Ada). Run-time type information is a specialization of a more general concept called type introspection. 
In the original C++ design, Bjarne Stroustrup did not include run-time type information, because he thought this mechanism was often misused.
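A short, self-contained example of the runtime-check difference the question asks about; the class names are placeholders.
\begin{lstlisting}[language=C++,style=c]
#include <cstdio>

struct Base { virtual ~Base() = default; };   // polymorphic, so RTTI is available
struct Derived : Base { void hello() { std::puts("derived"); } };
struct Other : Base {};

int main() {
    Base *actually_other = new Other;

    // static_cast: no runtime check; the compiler trusts the programmer.
    // Using 'bad' as a Derived would be undefined behavior.
    Derived *bad = static_cast<Derived *>(actually_other);
    (void)bad;

    // dynamic_cast: consults RTTI at run time and yields nullptr on failure
    // (reference casts throw std::bad_cast instead).
    Derived *checked = dynamic_cast<Derived *>(actually_other);
    std::printf("dynamic_cast returned %s\n",
                checked ? "a valid pointer" : "nullptr");

    delete actually_other;
    return 0;
}
\end{lstlisting}
static_cast compiles to (at most) a pointer adjustment and trusts the programmer, which is why it is cheaper; dynamic_cast pays for an RTTI lookup and in exchange fails safely on an invalid downcast.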
epfl-collab
Once software has been shipped, what does the Software Development Lifecycle require you to do to maintain security guarantees?
['Provide new features to attract new users', 'Deploy updates timely and safely', 'Ensure the software works on newer machines', 'Track the evolution of third party dependencies']
D
null
Document 1::: Software quality Software Assurance (SA) covers both the property and the process to achieve it: confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle and that the software functions in the intended manner The planned and systematic set of activities that ensure that software life cycle processes and products conform to requirements, standards, and procedures Document 2::: Web operations After engineering had built a software product, and QA had verified it as correct, it would be handed to a support staff to operate the working software. Such a view assumed that software was mostly immutable in production and that usage would be mostly stable. Increasingly, "a web application involves many specialists, but it takes people in web ops to ensure that everything works together throughout an application's lifetime." Document 3::: Software verification and validation In some contexts, it is required to have written requirements for both as well as formal procedures or protocols for determining compliance. Ideally, formal methods provide a mathematical guarantee that software meets its specifications. Building the product right implies the use of the Requirements Specification as input for the next phase of the development process, the design process, the output of which is the Design Specification. Document 4::: Software verification and validation In other words, software verification ensures that the output of each phase of the software development process effectively carry out what its corresponding input artifact specifies (requirement -> design -> software product), while software validation ensures that the software product meets the needs of all the stakeholders (therefore, the requirement specification was correctly and accurately expressed in the first place). Software verification ensures that "you built it right" and confirms that the product, as provided, fulfills the plans of the developers. Software validation ensures that "you built the right thing" and confirms that the product, as provided, fulfills the intended use and goals of the stakeholders. Document 5::: Software Methodologies In software engineering, a software development process is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. It is also known as a software development life cycle (SDLC).
epfl-collab
You share an apartment with friends. Kitchen, living room, balcony, and bathroom are shared resources among all parties. Which policy/policies violate(s) the principle of least privilege?
['Different bedrooms do not have a different key.', "Nobody has access to the neighbor's basement.", 'To access the kitchen you have to go through the living room.', 'There is no lock on the fridge.']
A
null
Document 1::: Fair division among groups In each department there are several faculty members, with differing opinions about which rooms are better. Two neighboring countries want to divide a disputed region among them. Document 2::: House allocation problem Pareto efficiency (PE) - no other allocation is better for some agents and not worse to all agents. Fairness - can be defined in various ways, for example, envy-freeness (EF) - no agent should envy another agent. Strategyproofness (SP) - each agent has an incentive to report his/her true preferences to the algorithm. Individual rationality (IR) - no agent should lose from participating in the algorithm. Document 3::: Online fair division Again, each donation must be allocated immediately, and it is not known when and what future donations will be.Some situations in which not all participants are available include: Dividing a cake among people in a party. Some people come early and want to get a cake when they arrive, but other people may come later. Dividing the rent and rooms among tenants in a rented apartment, when one or more of them are not available during the allocation.The online nature of the problem requires different techniques and fairness criteria than in the classic, offline fair division. Document 4::: Principle of excluded middle In logic, the law of excluded middle (or the principle of excluded middle) states that for every proposition, either this proposition or its negation is true. It is one of the so-called three laws of thought, along with the law of noncontradiction, and the law of identity. However, no system of logic is built on just these laws, and none of these laws provides inference rules, such as modus ponens or De Morgan's laws. The law is also known as the law (or principle) of the excluded third, in Latin principium tertii exclusi. Document 5::: Least-upper-bound property In mathematics, the least-upper-bound property (sometimes called completeness or supremum property or l.u.b. property) is a fundamental property of the real numbers. More generally, a partially ordered set X has the least-upper-bound property if every non-empty subset of X with an upper bound has a least upper bound (supremum) in X. Not every (partially) ordered set has the least upper bound property. For example, the set Q {\displaystyle \mathbb {Q} } of all rational numbers with its natural order does not have the least upper bound property.
epfl-collab
Which of the following statement(s) is/are correct?
['An attacker-controlled format string can lead to arbitrary write.', 'An information leak can be a preparation step of control-flow hijacking.', 'When constructing a ROP payload, we use gadgets from all currently running processes', 'In format strings, \\%n prints a hex value']
B
null
Document 1::: Statement (logic) In logic and semantics, the term statement is variously understood to mean either: a meaningful declarative sentence that is true or false, or a proposition. Which is the assertion that is made by (i.e., the meaning of) a true or false declarative sentence.In the latter case, a statement is distinct from a sentence in that a sentence is only one formulation of a statement, whereas there may be many other formulations expressing the same statement. By a statement, I mean "that which one states", not one's stating of it. There are many interpretations of what the term statement means, but generally, it indicates either a meaningful declarative sentence that is either true or false (bivalence). Document 2::: Statement (logic) In logic and semantics, the term statement is variously understood to mean either: a meaningful declarative sentence that is true or false, or a proposition. Which is the assertion that is made by (i.e., the meaning of) a true or false declarative sentence.In the latter case, a statement is distinct from a sentence in that a sentence is only one formulation of a statement, whereas there may be many other formulations expressing the same statement. By a statement, I mean "that which one states", not one's stating of it. There are many interpretations of what the term statement means, but generally, it indicates either a meaningful declarative sentence that is either true or false (bivalence). Document 3::: Formal logic Arguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. Document 4::: Characterization theorem Common mathematical expressions for a characterization of X in terms of P include "P is necessary and sufficient for X", and "X holds if and only if P". It is also common to find statements such as "Property Q characterizes Y up to isomorphism". The first type of statement says in different words that the extension of P is a singleton set, while the second says that the extension of Q is a single equivalence class (for isomorphism, in the given example — depending on how up to is being used, some other equivalence relation might be involved). Document 5::: Atomic fact In logic and analytic philosophy, an atomic sentence is a type of declarative sentence which is either true or false (may also be referred to as a proposition, statement or truthbearer) and which cannot be broken down into other simpler sentences. For example, "The dog ran" is an atomic sentence in natural language, whereas "The dog ran and the cat hid" is a molecular sentence in natural language. From a logical analysis point of view, the truth or falsity of sentences in general is determined by only two things: the logical form of the sentence and the truth or falsity of its simple sentences. This is to say, for example, that the truth of the sentence "John is Greek and John is happy" is a function of the meaning of "and", and the truth values of the atomic sentences "John is Greek" and "John is happy".
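To make the format-string choices above concrete, here is a minimal, self-contained C sketch; the program and its input are hypothetical and not taken from any referenced source. Passing attacker-controlled text directly as a printf format string lets "%p"/"%x" leak stack contents (an information leak that can later help a control-flow hijack), and "%n" makes printf write the number of characters printed so far through a pointer it fetches from the variadic argument area, which is the write primitive behind the first choice.
\begin{lstlisting}[language=C]
#include <stdio.h>

/* Hypothetical logging routine; 'query' stands in for attacker input. */
static void log_query_vulnerable(const char *query) {
    /* BUG: user data is used as the format string. Input such as
     * "%p %p %p" leaks pointer-sized stack words, and "%n" turns the
     * bug into a memory write through an attacker-influenced pointer. */
    printf(query);
    printf("\n");
}

static void log_query_safe(const char *query) {
    /* Fix: constant format string, user data only as an argument. */
    printf("%s\n", query);
}

int main(void) {
    log_query_vulnerable("%p %p %p"); /* leaks three stack values      */
    log_query_safe("%p %p %p");       /* prints the literal characters */
    return 0;
}
\end{lstlisting}
Compilers typically warn about the vulnerable call (e.g. -Wformat-security in gcc/clang), which is one reason such bugs are rarer today than classic stack overflows.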
epfl-collab
Consider the following shellcode. Which of the following statement(s) is/are correct?
\begin{lstlisting}[language=nasm,style=nasm]
needle: jmp gofar
goback: pop %rdi
        xor %rax, %rax
        movb $0x3b, %al
        xor %rsi, %rsi
        xor %rdx, %rdx
        syscall
gofar:  call goback
        .string "/bin/sh"
\end{lstlisting}
['Lines 2-6 are preparing arguments for the syscall invocation.', 'In the exploit payload, the string "/bin/sh" must end with a "0x0" byte to ensure it is terminated correctly.', 'Line 3 is not necessary.', 'The purpose of line 8 is to push the address of "/bin/sh" to the stack and jump to line 2.']
A
null
Document 1::: Brace expansion Like most Unix shells, it supports filename globbing (wildcard matching), piping, here documents, command substitution, variables, and control structures for condition-testing and iteration. The keywords, syntax, dynamically scoped variables and other basic features of the language are all copied from sh. Other features, e.g., history, are copied from csh and ksh. Document 2::: Alphanumeric executable In hacking, a shellcode is a small piece of code used as the payload in the exploitation of a software vulnerability. It is called "shellcode" because it typically starts a command shell from which the attacker can control the compromised machine, but any piece of code that performs a similar task can be called shellcode. Because the function of a payload is not limited to merely spawning a shell, some have suggested that the name shellcode is insufficient. However, attempts at replacing the term have not gained wide acceptance. Document 3::: Alphanumeric executable Shellcode is commonly written in machine code. When creating shellcode, it is generally desirable to make it both small and executable, which allows it to be used in as wide a variety of situations as possible. Writing good shellcode can be as much an art as it is a science. In assembly code, the same function can be performed in a multitude of ways and there is some variety in the lengths of opcodes that can be used for this purpose; good shellcode writers can put these small opcodes to use to create more compact shellcode. Some have reached the smallest possible size while maintaining stability. Document 4::: Qshell Qshell is an optional command-line interpreter (shell) for the IBM i operating system. Qshell is based on POSIX and X/Open standards. It is a Bourne-like shell that also includes features of KornShell. The utilities (or commands) are external programs that provide additional functions. The development team of Qshell had to deal with platform-specific issues such as translating between ASCII and EBCDIC. The shell supports interactive mode as well as batch processing and can run shell scripts from Unix-like operating systems with few or no modifications. Document 5::: Program segment prefix Either function will return the PSP address in register BX.Alternatively, in .COM programs loaded at offset 100h, one can address the PSP directly just by using the offsets listed above. Offset 000h points to the beginning of the PSP, 0FFh points to the end, etc. For example, the following code displays the command line arguments: In DOS 1.x, it was necessary for the CS (Code Segment) register to contain the same segment as the PSP at program termination, thus standard programming practice involved saving the DS register (since the DS register is loaded with the PSP segment) along with a zero word to the stack at program start and terminating the program with a RETF instruction, which would pop the saved segment value off the stack and jump to address 0 of the PSP, which contained an INT 20h instruction. If the executable was a .COM file, this procedure was unnecessary and the program could be terminated merely with a direct INT 20h instruction or else calling INT 21h function 0. However, the programmer still had to ensure that the CS register contained the segment address of the PSP at program termination. Thus, In DOS 2.x and higher, program termination was accomplished instead with INT 21h function 4Ch which did not require the CS register to contain the segment value of the PSP.
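As a reading aid for the shellcode in the question above: on x86-64 Linux, 0x3b (59) is the execve system call number, %rdi/%rsi/%rdx carry its three arguments, and the jmp/call/pop sequence exists only to load the runtime address of the "/bin/sh" string into %rdi without hard-coding an address. The hedged C sketch below shows the call the payload ultimately performs; it is an equivalent of the prepared syscall, not the exploit itself.
\begin{lstlisting}[language=C]
#include <unistd.h>

int main(void) {
    /* Equivalent of the shellcode's syscall setup:
     *   %rax = 0x3b (execve)
     *   %rdi = pointer to "/bin/sh"
     *   %rsi = NULL (argv), %rdx = NULL (envp)                        */
    execve("/bin/sh", NULL, NULL); /* only returns on failure */
    return 1;
}
\end{lstlisting}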
epfl-collab
Which of the following statement(s) is/are true about Safe Exception Handling (SEH)?
['The implementation of SEH is compiler specific.', 'SEH is a defense that protects C/C++ programs against control-flow hijack attacks through changing exception data structures.', 'Neither SafeSEH nor SeHOP checks the order and number of exception handlers.', 'SafeSEH provides stronger protection than SeHOP.']
C
null
Document 1::: Exception handling syntax Exception handling syntax is the set of keywords and/or structures provided by a computer programming language to allow exception handling, which separates the handling of errors that arise during a program's operation from its ordinary processes. Syntax for exception handling varies between programming languages, partly to cover semantic differences but largely to fit into each language's overall syntactic structure. Some languages do not call the relevant concept "exception handling"; others may not have direct facilities for it, but can still provide means to implement it. Most commonly, error handling uses a try... block, and errors are created via a throw statement, but there is significant variation in naming and syntax. Document 2::: Object modeling Exceptions provide a clean way to deal with error conditions without complicating the code. A block of code may be defined to throw an exception whenever particular unexpected conditions or errors arise. This means that control passes to another block of code that catches the exception. Document 3::: Segmentation violation In computing, a segmentation fault (often shortened to segfault) or access violation is a fault, or failure condition, raised by hardware with memory protection, notifying an operating system (OS) the software has attempted to access a restricted area of memory (a memory access violation). On standard x86 computers, this is a form of general protection fault. The operating system kernel will, in response, usually perform some corrective action, generally passing the fault on to the offending process by sending the process a signal. Document 4::: Object modeling An invocation can include additional information needed to carry out the method. The receiver executes the appropriate method and then returns control to the invoking object, sometimes supplying a result.Exceptions Programs can encounter various errors and unexpected conditions of varying seriousness. During the execution of the method many different problems may be discovered. Document 5::: Null-pointer safety Void safety (also known as null safety) is a guarantee within an object-oriented programming language that no object references will have null or void values. In object-oriented languages, access to objects is achieved through references (or, equivalently, pointers). A typical call is of the form: x.f(a, ...) where f denotes an operation and x denotes a reference to some object.
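SEH here refers to Windows Structured Exception Handling, the mechanism that SafeSEH and SEHOP harden. It is reachable from C through the MSVC-specific __try/__except keywords; the sketch below is a minimal illustration that builds only with the Microsoft toolchain on Windows, and it shows the kind of per-frame handler registration (kept in records on the stack) that SEH-overwrite attacks corrupt and that these defenses try to validate.
\begin{lstlisting}[language=C]
#include <windows.h>
#include <stdio.h>

int main(void) {
    __try {
        volatile int *p = NULL;
        *p = 42;                       /* deliberate access violation */
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        /* Reached by walking the SEH handler chain for this thread. */
        printf("access violation handled via SEH\n");
    }
    return 0;
}
\end{lstlisting}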
epfl-collab
Which of the following statement(s) is/are true about CFI?
['When producing valid target sets, missing a legitimate target is unacceptable.', 'Keeping the overhead of producing valid target sets as low as possible is crucial for a CFI mechanism.', 'CFI’s checks of the valid target set are insufficient to protect every forward edge control-flow transfer', 'CFI prevents attackers from exploiting memory corruptions.']
A
null
Document 1::: Common Flash Memory Interface The Common Flash Memory Interface (CFI) is an open standard jointly developed by AMD, Intel, Sharp and Fujitsu. It is implementable by all flash memory vendors, and has been approved by the non-volatile-memory subcommittee of JEDEC. The goal of the specification is the interchangeability of flash memory devices offered by different vendors. The developer is able to use one driver for different flash products by reading identifying information from the flash chip. Document 2::: Controlled flight into terrain In aviation, a controlled flight into terrain (CFIT; usually SEE-fit) is an accident in which an airworthy aircraft, fully under pilot control, is unintentionally flown into the ground, a mountain, a body of water or an obstacle. In a typical CFIT scenario, the crew is unaware of the impending disaster until it is too late. The term was coined by engineers at Boeing in the late 1970s.Accidents where the aircraft is out of control at the time of impact, because of mechanical failure or pilot error, are not considered CFIT (they are known as uncontrolled flight into terrain or UFIT), nor are incidents resulting from the deliberate action of the person at the controls, such as acts of terrorism or suicide by pilot. According to Boeing in 1997, CFIT was a leading cause of airplane accidents involving the loss of life, causing over 9,000 deaths since the beginning of the commercial jet aircraft. CFIT was identified as a cause of 25% of USAF Class A mishaps between 1993 and 2002. According to data collected by the International Air Transport Association (IATA) between 2008 and 2017, CFITs accounted for six percent of all commercial aircraft accidents, and was categorized as "the second-highest fatal accident category after Loss of Control Inflight (LOCI)." Document 3::: CFI3 Fluorotriiodomethane is a chemical compound and methane derivative with the chemical formula CFI3. == References == Document 4::: CFM LEAP The CFM International LEAP ("Leading Edge Aviation Propulsion") is a high-bypass turbofan engine produced by CFM International, a 50–50 joint venture between American GE Aerospace (formerly GE Aviation) and French Safran Aircraft Engines (formerly Snecma). It is the successor of the CFM56 and competes with the Pratt & Whitney PW1000G to power narrow-body aircraft. Document 5::: Flight information service A flight information service (FIS) is a form of air traffic service which is available to any aircraft within a flight information region (FIR), as agreed internationally by ICAO. It is defined as information pertinent to the safe and efficient conduct of flight, and includes information on other potentially conflicting traffic, possibly derived from radar, but stopping short of providing positive separation from that traffic. Flight Information also includes: Meteorological information Information on aerodromes Information on possible hazards to flightFIS shall be provided to all aircraft which are provided with any air traffic control (ATC) service or are otherwise known to air traffic service units. All air traffic service units will provide an FIS to any aircraft, in addition to their other tasks.
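Forward-edge CFI, as the question uses the term, restricts each indirect call to a precomputed set of valid targets. The C sketch below is only a conceptual illustration with a hand-written allowlist and hypothetical handler names; production CFI (e.g. LLVM's) is emitted by the compiler and uses far cheaper checks.
\begin{lstlisting}[language=C]
#include <stdio.h>
#include <stdlib.h>

typedef void (*handler_t)(void);

static void handler_a(void) { puts("handler_a"); }
static void handler_b(void) { puts("handler_b"); }

/* Valid target set for this call site (what CFI computes statically). */
static const handler_t valid_targets[] = { handler_a, handler_b };

static void checked_indirect_call(handler_t fp) {
    for (size_t i = 0; i < sizeof valid_targets / sizeof valid_targets[0]; i++) {
        if (fp == valid_targets[i]) {
            fp();                      /* forward edge is allowed      */
            return;
        }
    }
    abort();                           /* corrupted pointer: fail hard */
}

int main(void) {
    checked_indirect_call(handler_b);
    return 0;
}
\end{lstlisting}
A coarse set like this one keeps the check cheap but, as the choices hint, may still leave an attacker several permitted targets to pivot to.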
epfl-collab
Assume we enforce CFI for function returns. Which of the following statements are true?
['CFI on returns will make control-flow hijacking harder', 'CFI on returns ensures that only the single valid target is allowed', 'CFI on returns is too coarse-grained and may give the adversary sufficient valid targets for an exploit', 'CFI on returns cannot support exceptions']
A
null
Document 1::: Control flow analysis In computer science, control-flow analysis (CFA) is a static-code-analysis technique for determining the control flow of a program. The control flow is expressed as a control-flow graph (CFG). For both functional programming languages and object-oriented programming languages, the term CFA, and elaborations such as k-CFA, refer to specific algorithms that compute control flow.For many imperative programming languages, the control flow of a program is explicit in a program's source code. Document 2::: Return value optimization In general, the C++ standard allows a compiler to perform any optimization, provided the resulting executable exhibits the same observable behaviour as if (i.e. pretending) all the requirements of the standard have been fulfilled. This is commonly referred to as the "as-if rule". The term return value optimization refers to a special clause in the C++ standard that goes even further than the "as-if" rule: an implementation may omit a copy operation resulting from a return statement, even if the copy constructor has side effects.The following example demonstrates a scenario where the implementation may eliminate one or both of the copies being made, even if the copy constructor has a visible side effect (printing text). The first copy that may be eliminated is the one where a nameless temporary C could be copied into the function f's return value. Document 3::: Return statement In computer programming, a return statement causes execution to leave the current subroutine and resume at the point in the code immediately after the instruction which called the subroutine, known as its return address. The return address is saved by the calling routine, today usually on the process's call stack or in a register. Return statements in many programming languages allow a function to specify a return value to be passed back to the code that called the function. Document 4::: Common Flash Memory Interface The Common Flash Memory Interface (CFI) is an open standard jointly developed by AMD, Intel, Sharp and Fujitsu. It is implementable by all flash memory vendors, and has been approved by the non-volatile-memory subcommittee of JEDEC. The goal of the specification is the interchangeability of flash memory devices offered by different vendors. The developer is able to use one driver for different flash products by reading identifying information from the flash chip. Document 5::: Strict function This function is strict in its first parameter, since the function must know whether its first argument evaluates to true or to false before it can return; but it is non-strict in its second parameter, because (for example) if(false, ⊥ {\displaystyle \perp } ,1) = 1, as well as non-strict in its third parameter, because (for example) if(true,2, ⊥ {\displaystyle \perp } ) = 2. However, it is jointly strict in its second and third parameters, since if(true, ⊥ {\displaystyle \perp } , ⊥ {\displaystyle \perp } ) = ⊥ {\displaystyle \perp } and if(false, ⊥ {\displaystyle \perp } , ⊥ {\displaystyle \perp } ) = ⊥ {\displaystyle \perp } . In a non-strict functional programming language, strictness analysis refers to any algorithm used to prove the strictness of a function with respect to one or more of its arguments. Such functions can be compiled to a more efficient calling convention, such as call by value, without changing the meaning of the enclosing program.
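For return (backward) edges, the usual enforcement is a shadow stack rather than a target set: the return address is recorded on function entry and compared on exit, so each return has exactly one legitimate target. The sketch below is a purely conceptual, hand-instrumented illustration using the GCC/Clang builtin __builtin_return_address; real deployments rely on compiler instrumentation or hardware support (e.g. Intel CET shadow stacks), and the shadow memory would be protected rather than a plain global array.
\begin{lstlisting}[language=C]
#include <stdio.h>
#include <stdlib.h>

/* Conceptual shadow stack; in practice this memory is hidden/protected
 * and the push/check calls are inserted by the compiler.              */
static void *shadow_stack[1024];
static int   shadow_top = 0;

static void shadow_push(void *ret) { shadow_stack[shadow_top++] = ret; }

static void shadow_check(void *ret) {
    if (shadow_stack[--shadow_top] != ret) {
        fprintf(stderr, "return address mismatch\n");
        abort();                      /* backward-edge CFI violation  */
    }
}

static void callee(void) {
    shadow_push(__builtin_return_address(0));
    /* ... function body: a stack smash here could overwrite the saved
     *     return address in this frame, but not the shadow copy ...  */
    shadow_check(__builtin_return_address(0)); /* re-read and compare */
}

int main(void) {
    callee();
    puts("returned through a verified edge");
    return 0;
}
\end{lstlisting}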
epfl-collab
Which of the following statements about mitigations are true?
['No mitigation requires hardware support to be implemented', 'The performance of certain mitigations depends on underlying architecture features (e.g., i386 versus x86-64)', 'The bug remains in the application, mitigations simply make exploitation harder', 'All mitigations fully stop an attack vector']
C
null
Document 1::: Mitigation Mitigation is the reduction of something harmful or the reduction of its harmful effects. It may refer to measures taken to reduce the harmful effects of hazards that remain in potentia, or to manage harmful incidents that have already occurred. It is a stage or component of emergency management and of risk management. The theory of mitigation is a frequently used element in criminal law and is often used by a judge to try cases such as murder, where a perpetrator is subject to varying degrees of responsibility as a result of one's actions. Document 2::: Environmental mitigation Environmental mitigation, compensatory mitigation, or mitigation banking, are terms used primarily by the United States government and the related environmental industry to describe projects or programs intended to offset known impacts to an existing historic or natural resource such as a stream, wetland, endangered species, archeological site, paleontological site or historic structure. To "mitigate" means to make less harsh or hostile. Environmental mitigation is typically a part of an environmental crediting system established by governing bodies which involves allocating debits and credits. Document 3::: Risk mitigation Mitigation is the reduction of something harmful or the reduction of its harmful effects. It may refer to measures taken to reduce the harmful effects of hazards that remain in potentia, or to manage harmful incidents that have already occurred. It is a stage or component of emergency management and of risk management. The theory of mitigation is a frequently used element in criminal law and is often used by a judge to try cases such as murder, where a perpetrator is subject to varying degrees of responsibility as a result of one's actions. Document 4::: Environmental mitigation Debits occur in situations where a natural resource has been destroyed or severely impaired and credits are given in situations where a natural resource has been deemed to be improved or preserved. Therefore, when an entity such as a business or individual has a "debit" they are required to purchase a "credit". In some cases credits are bought from "mitigation banks" which are large mitigation projects established to provide credit to multiple parties in advance of development when such compensation cannot be achieved at the development site or is not seen as beneficial to the environment. Crediting systems can allow credit to be generated in different ways. For example, in the United States, projects are valued based on what the intentions of the project are which may be to preserve, enhance, restore or create (PERC) a natural resource. Document 5::: Flood mitigation Flood control (or flood mitigation or flood protection or flood alleviation) methods are used to reduce or prevent the detrimental effects of flood waters. Flood relief methods are used to reduce the effects of flood waters or high water levels. Flooding can be caused by a mix of both natural processes, such as extreme weather upstream, and human changes to waterbodies and runoff. A distinction is made between structural and non-structural flood control measures.
epfl-collab
When a test fails, it means that:
['either the program under test or the test itself has a bug, or both.', 'the test is incorrect.', 'the program under test has a bug.', 'that both the program and the test have a bug.']
A
null
Document 1::: High-stakes test A high-stakes test is a test with important consequences for the test taker. Passing has important benefits, such as a high school diploma, a scholarship, or a license to practice a profession. Failing has important disadvantages, such as being forced to take remedial classes until the test can be passed, not being allowed to drive a car, or difficulty finding employment. The use and misuse of high-stakes tests is a controversial topic in public education, especially in the United States and U.K., where they have become especially popular in recent years, used not only to assess school-age students but in attempts to increase teacher accountability. Document 2::: Test method ", as well as effective and reproducible.A test can be considered an observation or experiment that determines one or more characteristics of a given sample, product, process, or service. The purpose of testing involves a prior determination of expected observation and a comparison of that expectation to what one actually observes. The results of testing can be qualitative (yes/no), quantitative (a measured value), or categorical and can be derived from personal observation or the output of a precision measuring instrument. Usually the test result is the dependent variable, the measured response based on the particular conditions of the test or the level of the independent variable. Some tests, however, may involve changing the independent variable to determine the level at which a certain response occurs: in this case, the test result is the independent variable. Document 3::: Test method A test method is a method for a test in science or engineering, such as a physical test, chemical test, or statistical test. It is a definitive procedure that produces a test result. In order to ensure accurate and relevant test results, a test method should be "explicit, unambiguous, and experimentally feasible. Document 4::: Design for testing The tests are generally driven by test programs that execute using automatic test equipment (ATE) or, in the case of system maintenance, inside the assembled system itself. In addition to finding and indicating the presence of defects (i.e., the test fails), tests may be able to log diagnostic information about the nature of the encountered test fails. The diagnostic information can be used to locate the source of the failure. Document 5::: Size (statistics) In statistics, the size of a test is the probability of falsely rejecting the null hypothesis. That is, it is the probability of making a type I error. It is denoted by the Greek letter α (alpha).
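A minimal C example (hypothetical function and a deliberately wrong expected value) illustrates the point behind the marked answer: a failing check only shows that program and test disagree; here the code under test is correct and the defect is in the test itself.
\begin{lstlisting}[language=C]
#include <assert.h>
#include <stdio.h>

/* Code under test: correct. */
static int add(int a, int b) { return a + b; }

int main(void) {
    /* Buggy test: the expected value should be 4, so the assertion
     * fails even though add() behaves exactly as intended.           */
    assert(add(2, 2) == 5);
    puts("all tests passed");
    return 0;
}
\end{lstlisting}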
epfl-collab
Tick all correct answers:
["Fuzz testing scales at least to 1'000s of lines of code.", "Formal verification scales at least upto 100'000s of lines of code.", 'Formal verification and concolic execution scale to the same extent.', 'Compiler warnings scale to millions lines of code.']
A
null
Document 1::: Multiple choice questions Multiple choice (MC), objective response or MCQ (for multiple choice question) is a form of an objective assessment in which respondents are asked to select only correct answers from the choices offered as a list. The multiple choice format is most frequently used in educational testing, in market research, and in elections, when a person chooses between multiple candidates, parties, or policies. Although E. L. Thorndike developed an early scientific approach to testing students, it was his assistant Benjamin D. Wood who developed the multiple-choice test. Document 2::: Quizlet Quizlet is a multi-national American company that provides tools for studying and learning. It was founded in October 2005 by Andrew Sutherland, who at the time was a 15-year old student, and released to the public in January 2007. Quizlet's primary products include digital flash cards, matching games, practice electronic assessments, and live quizzes. In 2017, 1 in 2 high school students used Quizlet. As of December 2021, Quizlet has over 500 million user-generated flashcard sets and more than 60 million active users. Document 3::: Quizlet Quizlet is a multi-national American company that provides tools for studying and learning. It was founded in October 2005 by Andrew Sutherland, who at the time was a 15-year old student, and released to the public in January 2007. Quizlet's primary products include digital flash cards, matching games, practice electronic assessments, and live quizzes. In 2017, 1 in 2 high school students used Quizlet. As of December 2021, Quizlet has over 500 million user-generated flashcard sets and more than 60 million active users. Document 4::: Programmed learning After each step, learners are given a question to test their comprehension. Then immediately the correct answer is shown. Document 5::: Yale shooting problem This is the expected solution. It contains two fluent changes: l o a d e d {\displaystyle loaded} becomes true at time 1 and a l i v e {\displaystyle alive} becomes false at time 3. The following evaluation also satisfies all formulae above.
epfl-collab
Which of the following statement(s) is/are true about different types of coverage for coverage-guided fuzzing?
['Full data flow coverage is easier to obtain than full edge coverage', 'If you cover all edges, you also cover all blocks', 'Full line/statement coverage means that every possible control flow through the target has been covered', 'Full edge coverage is equivalent to full path coverage because every possible basic block transition has been covered']
B
null
Document 1::: Fault injection Robustness testing (also known as syntax testing, fuzzing or fuzz testing) is a type of fault injection commonly used to test for vulnerabilities in communication interfaces such as protocols, command line parameters, or APIs. The propagation of a fault through to an observable failure follows a well-defined cycle. When executed, a fault may cause an error, which is an invalid state within a system boundary. Document 2::: Intelligent verification Automatically tracking paths through design structure to coverage points, to create new tests. Ensuring that various aspects of the design are only verified once in the same test sets. Scaling the test automatically for different hardware and software configurations of a system. Support for different verification methodologies like constrained random, directed, graph-based, use-case based in the same tool. "Intelligent Verification" uses existing logic simulation testbenches, and automatically targets and maximizes the following types of design coverage: Code coverage Branch coverage Expression coverage Functional coverage Assertion coverage Document 3::: Differential testing Differential testing, also known as differential fuzzing, is a popular software testing technique that attempts to detect bugs, by providing the same input to a series of similar applications (or to different implementations of the same application), and observing differences in their execution. Differential testing complements traditional software testing, because it is well-suited to find semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is sometimes called back-to-back testing. Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug. Document 4::: Intelligent verification With automated coverage feedback, the test description is automatically adjusted to target design functionality that has not been previously verified (or "covered") by other tests existing tests. A key property of automated coverage feedback is that, given the same test environment, the software will automatically change the tests to improve functional design coverage in response to changes in the design. Newer intelligent verification tools are able to derive the essential functions one would expect of a testbench (stimulus, coverage, and checking) from a single, compact, high-level model. Document 5::: Modified Condition/Decision Coverage Condition coverage Every condition in a decision in the program has taken all possible outcomes at least once. Decision coverage Every point of entry and exit in the program has been invoked at least once, and every decision in the program has taken all possible outcomes at least once. Condition/decision coverage Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, and every decision in the program has taken all possible outcomes at least once.
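A small hypothetical C function makes the relations between the coverage notions in the choices concrete: one call with x = 5 already executes every statement of f, but it never takes the false edge of the if, so statement coverage does not imply edge coverage; covering every edge, on the other hand, necessarily executes every basic block, while full path coverage would additionally require every combination of branch outcomes across a run.
\begin{lstlisting}[language=C]
#include <stdio.h>

static int f(int x) {
    int y = 0;
    if (x > 0) {          /* the edge for a false condition is only   */
        y = x * 2;        /* taken when f is called with x <= 0       */
    }
    return y;
}

int main(void) {
    printf("%d\n", f(5));  /* full statement/line coverage of f          */
    printf("%d\n", f(-1)); /* additionally needed for full edge coverage */
    return 0;
}
\end{lstlisting}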
epfl-collab
Which of the following is/are true about fuzzing?
['In structure-aware fuzzing, the mutator should only generate inputs that comply with all the format rules.', 'Black box fuzzing may struggle to find inputs that reach deep into the program.', 'The quality of initial seeds matters in mutational fuzzing.', 'Fuzzing is complete as soon as all code is covered.']
B
null
Document 1::: Fault injection Robustness testing (also known as syntax testing, fuzzing or fuzz testing) is a type of fault injection commonly used to test for vulnerabilities in communication interfaces such as protocols, command line parameters, or APIs. The propagation of a fault through to an observable failure follows a well-defined cycle. When executed, a fault may cause an error, which is an invalid state within a system boundary. Document 2::: Differential testing Differential testing, also known as differential fuzzing, is a popular software testing technique that attempts to detect bugs, by providing the same input to a series of similar applications (or to different implementations of the same application), and observing differences in their execution. Differential testing complements traditional software testing, because it is well-suited to find semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is sometimes called back-to-back testing. Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug. Document 3::: FuzzyCLIPS The system uses two basic inexact concepts, fuzziness and uncertainty. It has provided a useful environment for developing fuzzy applications but it does require significant effort to update and maintain as new versions of CLIPS are released. Document 4::: Fuzzy extractor So Fuzzy extractors output almost uniform random sequences of bits which are a prerequisite for using cryptographic applications (as secret keys). Since the output bits are slightly non-uniform, there's a risk of a decreased security; but the distance from a uniform distribution is no more than ϵ {\displaystyle \epsilon } . As long as this distance is sufficiently small, the security will remain adequate. Document 5::: Fuzzy extractor An ( m , l , t , ϵ ) {\displaystyle (m,l,t,\epsilon )} fuzzy extractor is a pair of efficient randomized procedures (Gen – Generate and Rep – Reproduce) such that: (1) Gen, given w ∈ M {\displaystyle w\in \mathbb {M} } , outputs an extracted string R ∈ { 0 , 1 } l {\displaystyle R\in {\mathbb {\{} 0,1\}^{l}}} and a helper string P ∈ { 0 , 1 } ∗ {\displaystyle P\in {\mathbb {\{} 0,1\}^{*}}} . (2) Correctness: If d i s ( w , w ′ ) ≤ t {\displaystyle dis(w,w')\leq t} and ( R , P ) ← G e n ( w ) {\displaystyle (R,P)\leftarrow Gen(w)} , then R e p ( w ′ , P ) = R {\displaystyle Rep(w',P)=R} . (3) Security: For all m-sources W {\displaystyle W} over M {\displaystyle M} , the string R {\displaystyle R} is nearly uniform, even given P {\displaystyle P} . So, when H ~ ∞ ( W | E ) ≥ m {\displaystyle {\tilde {H}}_{\mathrm {\infty } }(W|E)\geq m} , then ( R , P , E ) ≈ ( U l , P , E ) {\displaystyle (R,P,E)\approx (U_{\mathrm {l} },P,E)} .
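To make mutational fuzzing and the role of seeds concrete, here is a deliberately naive, self-contained C sketch with a toy target and a hypothetical seed: each iteration copies the seed, flips one random bit, and runs the target. The seed is chosen close to the interesting input, which is exactly why seed quality matters; a black-box loop like this, with no coverage feedback, quickly struggles once the triggering condition gets deeper than a byte or two.
\begin{lstlisting}[language=C]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy target: the "interesting" behaviour triggers only on "FUZZ...". */
static int parse_input(const unsigned char *buf, size_t len) {
    return len > 3 && buf[0] == 'F' && buf[1] == 'U' &&
           buf[2] == 'Z' && buf[3] == 'Z';
}

int main(void) {
    const unsigned char seed[] = "FUZX";  /* good seed: one bit away   */
    unsigned char mutant[sizeof seed];
    srand(1);                             /* deterministic demo        */

    for (int i = 0; i < 100000; i++) {
        memcpy(mutant, seed, sizeof seed);
        size_t pos = (size_t)rand() % (sizeof seed - 1);
        mutant[pos] ^= (unsigned char)(1u << (rand() % 8)); /* mutate  */

        if (parse_input(mutant, sizeof seed - 1)) {
            printf("triggering input found after %d executions\n", i + 1);
            return 0;
        }
    }
    puts("budget exhausted without triggering the target");
    return 0;
}
\end{lstlisting}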
epfl-collab
Which of the following is/are true about fuzzing?
['The efficacy of a fuzzing campaign scales with its speed (executions per second)', "Fuzzers may get ``stuck'' and cannot easily detect that they are no longer improving coverage", 'There is little to no benefit in running fuzzers in parallel.', 'Fuzzers generally determine the exploitability of a crash.']
B
null
Document 1::: Fault injection Robustness testing (also known as syntax testing, fuzzing or fuzz testing) is a type of fault injection commonly used to test for vulnerabilities in communication interfaces such as protocols, command line parameters, or APIs. The propagation of a fault through to an observable failure follows a well-defined cycle. When executed, a fault may cause an error, which is an invalid state within a system boundary. Document 2::: Differential testing Differential testing, also known as differential fuzzing, is a popular software testing technique that attempts to detect bugs, by providing the same input to a series of similar applications (or to different implementations of the same application), and observing differences in their execution. Differential testing complements traditional software testing, because it is well-suited to find semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is sometimes called back-to-back testing. Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug. Document 3::: FuzzyCLIPS The system uses two basic inexact concepts, fuzziness and uncertainty. It has provided a useful environment for developing fuzzy applications but it does require significant effort to update and maintain as new versions of CLIPS are released. Document 4::: Fuzzy extractor So Fuzzy extractors output almost uniform random sequences of bits which are a prerequisite for using cryptographic applications (as secret keys). Since the output bits are slightly non-uniform, there's a risk of a decreased security; but the distance from a uniform distribution is no more than ϵ {\displaystyle \epsilon } . As long as this distance is sufficiently small, the security will remain adequate. Document 5::: Fuzzy extractor An ( m , l , t , ϵ ) {\displaystyle (m,l,t,\epsilon )} fuzzy extractor is a pair of efficient randomized procedures (Gen – Generate and Rep – Reproduce) such that: (1) Gen, given w ∈ M {\displaystyle w\in \mathbb {M} } , outputs an extracted string R ∈ { 0 , 1 } l {\displaystyle R\in {\mathbb {\{} 0,1\}^{l}}} and a helper string P ∈ { 0 , 1 } ∗ {\displaystyle P\in {\mathbb {\{} 0,1\}^{*}}} . (2) Correctness: If d i s ( w , w ′ ) ≤ t {\displaystyle dis(w,w')\leq t} and ( R , P ) ← G e n ( w ) {\displaystyle (R,P)\leftarrow Gen(w)} , then R e p ( w ′ , P ) = R {\displaystyle Rep(w',P)=R} . (3) Security: For all m-sources W {\displaystyle W} over M {\displaystyle M} , the string R {\displaystyle R} is nearly uniform, even given P {\displaystyle P} . So, when H ~ ∞ ( W | E ) ≥ m {\displaystyle {\tilde {H}}_{\mathrm {\infty } }(W|E)\geq m} , then ( R , P , E ) ≈ ( U l , P , E ) {\displaystyle (R,P,E)\approx (U_{\mathrm {l} },P,E)} .
epfl-collab
Which of the following is/are true about fuzzing?
['Fuzzing open-source software allows the analyst to modify the target software to remove parts where the fuzzer might get stuck (such as checksums).', 'Fuzzing can only be applied to C/C++ programs.', 'When fuzzing open-source software, recompiling it with mitigations disabled will improve the fuzzing process.', 'Having too many initial seeds might harm fuzzing performance.']
D
null
Document 1::: Fault injection Robustness testing (also known as syntax testing, fuzzing or fuzz testing) is a type of fault injection commonly used to test for vulnerabilities in communication interfaces such as protocols, command line parameters, or APIs. The propagation of a fault through to an observable failure follows a well-defined cycle. When executed, a fault may cause an error, which is an invalid state within a system boundary. Document 2::: Differential testing Differential testing, also known as differential fuzzing, is a popular software testing technique that attempts to detect bugs, by providing the same input to a series of similar applications (or to different implementations of the same application), and observing differences in their execution. Differential testing complements traditional software testing, because it is well-suited to find semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is sometimes called back-to-back testing. Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug. Document 3::: FuzzyCLIPS The system uses two basic inexact concepts, fuzziness and uncertainty. It has provided a useful environment for developing fuzzy applications but it does require significant effort to update and maintain as new versions of CLIPS are released. Document 4::: Fuzzy extractor So Fuzzy extractors output almost uniform random sequences of bits which are a prerequisite for using cryptographic applications (as secret keys). Since the output bits are slightly non-uniform, there's a risk of a decreased security; but the distance from a uniform distribution is no more than ϵ {\displaystyle \epsilon } . As long as this distance is sufficiently small, the security will remain adequate. Document 5::: Fuzzy extractor An ( m , l , t , ϵ ) {\displaystyle (m,l,t,\epsilon )} fuzzy extractor is a pair of efficient randomized procedures (Gen – Generate and Rep – Reproduce) such that: (1) Gen, given w ∈ M {\displaystyle w\in \mathbb {M} } , outputs an extracted string R ∈ { 0 , 1 } l {\displaystyle R\in {\mathbb {\{} 0,1\}^{l}}} and a helper string P ∈ { 0 , 1 } ∗ {\displaystyle P\in {\mathbb {\{} 0,1\}^{*}}} . (2) Correctness: If d i s ( w , w ′ ) ≤ t {\displaystyle dis(w,w')\leq t} and ( R , P ) ← G e n ( w ) {\displaystyle (R,P)\leftarrow Gen(w)} , then R e p ( w ′ , P ) = R {\displaystyle Rep(w',P)=R} . (3) Security: For all m-sources W {\displaystyle W} over M {\displaystyle M} , the string R {\displaystyle R} is nearly uniform, even given P {\displaystyle P} . So, when H ~ ∞ ( W | E ) ≥ m {\displaystyle {\tilde {H}}_{\mathrm {\infty } }(W|E)\geq m} , then ( R , P , E ) ≈ ( U l , P , E ) {\displaystyle (R,P,E)\approx (U_{\mathrm {l} },P,E)} .
epfl-collab
Which of the following is/are true about testing?
['Adequate code coverage is crucial for dynamic testing.', 'False positives matter in static analyses.', 'Tests are sufficient to prove that a program is bug-free.', 'Symbolic execution is a technique of whitebox dynamic testing.']
A
null
Document 1::: Test method ", as well as effective and reproducible.A test can be considered an observation or experiment that determines one or more characteristics of a given sample, product, process, or service. The purpose of testing involves a prior determination of expected observation and a comparison of that expectation to what one actually observes. The results of testing can be qualitative (yes/no), quantitative (a measured value), or categorical and can be derived from personal observation or the output of a precision measuring instrument. Usually the test result is the dependent variable, the measured response based on the particular conditions of the test or the level of the independent variable. Some tests, however, may involve changing the independent variable to determine the level at which a certain response occurs: in this case, the test result is the independent variable. Document 2::: Test Design A test condition is a statement about the test object. Test conditions can be stated for any part of a component or system that could be verified: functions, transactions, features, quality attributes or structural elements. The fundamental challenge of test design is that there are infinitely many different tests that you could run, but there is not enough time to run them all. A subset of tests must be selected; small enough to run, but well-chosen enough that the tests find bug and expose other quality-related information.Test design is one of the most important prerequisites of software quality. Document 3::: Correctness (computer science) The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality. Document 4::: Test Design In software engineering, test design is the activity of deriving and specifying test cases from test conditions to test software. Document 5::: Random testing Random testing is a black-box software testing technique where programs are tested by generating random, independent inputs. Results of the output are compared against software specifications to verify that the test output is pass or fail. In case of absence of specifications the exceptions of the language are used which means if an exception arises during test execution then it means there is a fault in the program, it is also used as a way to avoid biased testing.
epfl-collab
Which of the following is/are true about fuzzing with sanitizers?
['The set of sanitizers used during a fuzzing campaign must be carefully chosen (tradeoff between bug visibility/execution speed).', 'ASAN instrumentation has a negligible startup overhead.', 'Some fuzzers dynamically tweak sanitizers to speed up fuzzing.', 'Some fuzzers use fork servers to reduce sanitizer overhead.']
D
null
Document 1::: Fault injection Robustness testing (also known as syntax testing, fuzzing or fuzz testing) is a type of fault injection commonly used to test for vulnerabilities in communication interfaces such as protocols, command line parameters, or APIs. The propagation of a fault through to an observable failure follows a well-defined cycle. When executed, a fault may cause an error, which is an invalid state within a system boundary. Document 2::: Antibacterial activity A clean and hygienic manufacturing environment is an essential prerequisite in order to keep contamination-related reject rates low. The utilization of surfaces in the manufacturing environment with antibacterial properties can significantly reduce contamination risks.The determination of the antibacterial activity (microbicidy) of surfaces is described in the following norms: ISO 22196 and JIS Z 2801. The Japanese norm JIS Z 2801 was published in 2000 and published again in 2007 as the internationally valid norm ISO 22196. Therefore, ISO 22196 and JIS Z 2801 are identical. In the test, both a surface system coated with sporicide and an identical surface system without an antibacterial coating are charged with selected microorganisms.A once-only assessment of the reduction factor is carried out after 24 hours by determining colony counts on the reference surface and on the antibacterial surface. Document 3::: Skin disinfection Skin disinfection is a process that involves the application of a disinfectant to reduce levels of microorganisms on the skin. Disinfecting the skin of the patient and the hands of the healthcare providers are an important part of surgery.Skin disinfection may be accomplished with a number of solutions including providone-iodine, chlorhexidine, alcohol based solutions, and cetrimide. There is strong evidence that chlorhexidine and denatured alcohol use to clean skin prior to surgery is better than any other commercially available antiseptic, such as povidone-iodine with alcohol.Its importance in health care was determined by Semmelweis in the 1840s. == References == Document 4::: Flash pasteurization Flash pasteurization, also called "high-temperature short-time" (HTST) processing, is a method of heat pasteurization of perishable beverages like fruit and vegetable juices, beer, wine, and some dairy products such as milk. Compared with other pasteurization processes, it maintains color and flavor better, but some cheeses were found to have varying responses to the process.Flash pasteurization is performed to kill spoilage microorganisms prior to filling containers, in order to make the products safer and to extend their shelf life compared to the unpasteurised foodstuff. For example, one manufacturer of flash pasteurizing machinery gives shelf life as "in excess of 12 months". It must be used in conjunction with sterile fill technology (similar to aseptic processing) to prevent post-pasteurization contamination.The liquid moves in a controlled, continuous flow while subjected to temperatures of 71.5 °C (160 °F) to 74 °C (165 °F), for about 15 to 30 seconds, followed by rapid cooling to between 4 °C (39.2 °F) and 5.5 °C (42 °F). Document 5::: Differential testing Differential testing, also known as differential fuzzing, is a popular software testing technique that attempts to detect bugs, by providing the same input to a series of similar applications (or to different implementations of the same application), and observing differences in their execution. 
Differential testing complements traditional software testing, because it is well-suited to find semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is sometimes called back-to-back testing. Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug.
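The sanitizer and fork-server trade-offs in this question are easiest to see with an in-process harness. The sketch below uses the standard libFuzzer entry point LLVMFuzzerTestOneInput around a hypothetical parse routine with an off-by-one read; compiling it with something like clang -g -O1 -fsanitize=fuzzer,address adds AddressSanitizer instrumentation so the bad read is reported at the faulting access. Treat the build line as indicative; exact flags and the resulting overhead depend on the toolchain, which is precisely the visibility-versus-speed trade-off the first choice refers to.
\begin{lstlisting}[language=C]
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical target: reads one byte past the input when it starts
 * with the magic bytes "BUG"; ASan reports the out-of-bounds read.   */
static int parse(const uint8_t *data, size_t size) {
    if (size >= 3 && memcmp(data, "BUG", 3) == 0) {
        return data[size];            /* off-by-one read past the end */
    }
    return 0;
}

/* libFuzzer calls this once per generated input.
 * Assumed build: clang -g -O1 -fsanitize=fuzzer,address harness.c    */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse(data, size);
    return 0;
}
\end{lstlisting}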
epfl-collab
Consider the Diffie-Hellman secret-key-exchange algorithm performed in the cyclic group $(\mathbb{Z}/11\mathbb{Z}^\star, \cdot)$. Let $g=2$ be the chosen group generator. Suppose that Alice's secret number is $a=5$ and Bob's is $b=3$. Which common key $k$ does the algorithm lead to? Check the correct answer.
['$8$', '$10$', '$7$', '$9$']
B
null
Document 1::: Computational Diffie–Hellman assumption Consider a cyclic group G of order q. The CDH assumption states that, given ( g , g a , g b ) {\displaystyle (g,g^{a},g^{b})\,} for a randomly chosen generator g and random a , b ∈ { 0 , … , q − 1 } , {\displaystyle a,b\in \{0,\ldots ,q-1\},\,} it is computationally intractable to compute the value g a b . {\displaystyle g^{ab}.\,} Document 2::: Computational Diffie–Hellman assumption The computational Diffie–Hellman (CDH) assumption is a computational hardness assumption about the Diffie–Hellman problem. The CDH assumption involves the problem of computing the discrete logarithm in cyclic groups. The CDH problem illustrates the attack of an eavesdropper in the Diffie–Hellman key exchange protocol to obtain the exchanged secret key. Document 3::: Group-based cryptography Group-based cryptography is a use of groups to construct cryptographic primitives. A group is a very general algebraic object and most cryptographic schemes use groups in some way. In particular Diffie–Hellman key exchange uses finite cyclic groups. So the term group-based cryptography refers mostly to cryptographic protocols that use infinite non-abelian groups such as a braid group. Document 4::: Elliptic-curve Diffie-Hellman Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography. Document 5::: ElGamal cryptosystem In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie–Hellman key exchange. It was described by Taher Elgamal in 1985. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. The Digital Signature Algorithm (DSA) is a variant of the ElGamal signature scheme, which should not be confused with ElGamal encryption. ElGamal encryption can be defined over any cyclic group G {\displaystyle G} , like multiplicative group of integers modulo n. Its security depends upon the difficulty of a certain problem in G {\displaystyle G} related to computing discrete logarithms.
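One way to verify answer B is to run the textbook Diffie-Hellman steps with the given parameters; this is a direct computation with no assumptions beyond the statement above.
\[
A = g^a \bmod 11 = 2^5 \bmod 11 = 10, \qquad
B = g^b \bmod 11 = 2^3 \bmod 11 = 8,
\]
\[
k = B^a \bmod 11 = 8^5 \bmod 11 = 10 = A^b \bmod 11 = 10^3 \bmod 11.
\]
Both parties therefore derive the common key $k = 10$.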
epfl-collab
How many integers $n$ between $1$ and $2021$ satisfy $10^n \equiv 1 \mod 11$? Check the correct answer.
['505', '990', '1010', '183']
C
null
Document 1::: Reduced residue system In mathematics, a subset R of the integers is called a reduced residue system modulo n if: gcd(r, n) = 1 for each r in R, R contains φ(n) elements, no two elements of R are congruent modulo n.Here φ denotes Euler's totient function. A reduced residue system modulo n can be formed from a complete residue system modulo n by removing all integers not relatively prime to n. For example, a complete residue system modulo 12 is {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}. The so-called totatives 1, 5, 7 and 11 are the only integers in this set which are relatively prime to 12, and so the corresponding reduced residue system modulo 12 is {1, 5, 7, 11}. The cardinality of this set can be calculated with the totient function: φ(12) = 4. Some other reduced residue systems modulo 12 are: {13,17,19,23} {−11,−7,−5,−1} {−7,−13,13,31} {35,43,53,61} Document 2::: Euler totient Therefore, φ(9) = 6. As another example, φ(1) = 1 since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1. Document 3::: Squarefree integer In mathematics, a square-free integer (or squarefree integer) is an integer which is divisible by no square number other than 1. That is, its prime factorization has exactly one factor for each prime that appears in it. For example, 10 = 2 ⋅ 5 is square-free, but 18 = 2 ⋅ 3 ⋅ 3 is not, because 18 is divisible by 9 = 32. The smallest positive square-free numbers are Document 4::: Partition number Furthermore p(n) = 0 when n is negative. The first few values of the partition function, starting with p(0) = 1, are: Some exact values of p(n) for larger values of n include: As of June 2022, the largest known prime number among the values of p(n) is p(1289844341), with 40,000 decimal digits. Until March 2022, this was also the largest prime that has been proved using elliptic curve primality proving. Document 5::: Blum integer In mathematics, a natural number n is a Blum integer if n = p × q is a semiprime for which p and q are distinct prime numbers congruent to 3 mod 4. That is, p and q must be of the form 4t + 3, for some integer t. Integers of this form are referred to as Blum primes. This means that the factors of a Blum integer are Gaussian primes with no imaginary part. The first few Blum integers are 21, 33, 57, 69, 77, 93, 129, 133, 141, 161, 177, 201, 209, 213, 217, 237, 249, 253, 301, 309, 321, 329, 341, 381, 393, 413, 417, 437, 453, 469, 473, 489, 497, ... (sequence A016105 in the OEIS)The integers were named for computer scientist Manuel Blum.
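Since $10 \equiv -1 \pmod{11}$, the congruence in the question reduces to a parity condition; the short derivation below checks answer C without any further assumptions.
\[
10^n \equiv (-1)^n \pmod{11}, \qquad (-1)^n \equiv 1 \pmod{11} \iff n \text{ is even},
\]
and the number of even integers $n$ with $1 \le n \le 2021$ is $\lfloor 2021/2 \rfloor = 1010$.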
epfl-collab
You are given an i.i.d. source with symbols taking values in the alphabet $\mathcal{A}=\{a,b,c,d\}$ and probabilities $\{1/8,1/8,1/4,1/2\}$. Consider making blocks of length $n$ and constructing a Huffman code that assigns a binary codeword to each block of $n$ symbols. Choose the correct statement regarding the average codeword length per source symbol.
['It is the same for all $n$.', 'In going from $n$ to $n+1$, for some $n$ it stays constant and for some it strictly decreases.', 'None of the others.', 'It strictly decreases as $n$ increases.']
A
null
Document 1::: Length-limited Huffman code In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code is Huffman coding, an algorithm developed by David A. Huffman while he was a Sc.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". The output from Huffman's algorithm can be viewed as a variable-length code table for encoding a source symbol (such as a character in a file). The algorithm derives this table from the estimated probability or frequency of occurrence (weight) for each possible value of the source symbol. Document 2::: Length-limited Huffman code As in other entropy encoding methods, more common symbols are generally represented using fewer bits than less common symbols. Huffman's method can be efficiently implemented, finding a code in time linear to the number of input weights if these weights are sorted. However, although optimal among methods encoding symbols separately, Huffman coding is not always optimal among all compression methods - it is replaced with arithmetic coding or asymmetric numeral systems if a better compression ratio is required. Document 3::: Analog encryption Length of the code word is written as $l(C(x))$. Expected length of a code is $l(C) = \sum_{x \in X} l(C(x))\, P(x)$. Document 4::: Inductive probability Sentences may be written down in this language as strings of characters. But in the computer it is possible to encode these sentences as strings of bits (1s and 0s). Then the language may be encoded so that the most commonly used sentences are the shortest. Document 5::: Entropy coder In information theory, an entropy coding (or entropy encoding) is any lossless data compression method that attempts to approach the lower bound declared by Shannon's source coding theorem, which states that any lossless data compression method must have expected code length greater or equal to the entropy of the source. More precisely, the source coding theorem states that for any source distribution, the expected code length satisfies $\mathbb{E}_{x \sim P}[\ell(d(x))] \ge \mathbb{E}_{x \sim P}[-\log_b(P(x))]$, where $\ell$ is the number of symbols in a code word, $d$ is the coding function, $b$ is the number of symbols used to make output codes and $P$ is the probability of the source symbol. An entropy coding attempts to approach this lower bound. Two of the most common entropy coding techniques are Huffman coding and arithmetic coding. If the approximate entropy characteristics of a data stream are known in advance (especially for signal compression), a simpler static code may be useful. These static codes include universal codes (such as Elias gamma coding or Fibonacci coding) and Golomb codes (such as unary coding or Rice coding). Since 2014, data compressors have started using the asymmetric numeral systems family of entropy coding techniques, which allows combination of the compression ratio of arithmetic coding with a processing cost similar to Huffman coding.
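The probabilities in the question are dyadic (negative powers of two), which is why the per-symbol average codeword length is the same for every block length $n$ (answer A); the short computation below makes this explicit.
\[
H = \tfrac{1}{8}\log_2 8 + \tfrac{1}{8}\log_2 8 + \tfrac{1}{4}\log_2 4 + \tfrac{1}{2}\log_2 2
  = \tfrac{3}{8} + \tfrac{3}{8} + \tfrac{1}{2} + \tfrac{1}{2} = 1.75 \text{ bits/symbol}.
\]
For blocks of length $n$ the block probabilities are again negative powers of two, so a Huffman code for the blocks meets the block entropy $nH$ exactly and the average length per source symbol stays at $1.75$ bits for every $n$.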
epfl-collab
A bag contains the letters of LETSPLAY. Someone picks at random 4 letters from the bag without revealing the outcome to you. Subsequently you pick one letter at random among the remaining 4 letters. What is the entropy (in bits) of the random variable that models your choice? Check the correct answer.
['$\\log_2(8)$', '$\\log_2(7)$', '$2$', '$\\frac{11}{4}$']
D
null
Document 1::: Shannon Entropy Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable X {\textstyle X} , which takes values in the alphabet X {\displaystyle {\mathcal {X}}} and is distributed according to p: X → {\displaystyle p:{\mathcal {X}}\to } such that p ( x ) := P {\displaystyle p(x):=\mathbb {P} }: Here E {\displaystyle \mathbb {E} } is the expected value operator, and I is the information content of X.: 11: 19–20 I ⁡ ( X ) {\displaystyle \operatorname {I} (X)} is itself a random variable. The entropy can explicitly be written as: where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10.In the case of p ( x ) = 0 {\displaystyle p(x)=0} for some x ∈ X {\displaystyle x\in {\mathcal {X}}} , the value of the corresponding summand 0 logb(0) is taken to be 0, which is consistent with the limit:: 13 One may also define the conditional entropy of two variables X {\displaystyle X} and Y {\displaystyle Y} taking values from sets X {\displaystyle {\mathcal {X}}} and Y {\displaystyle {\mathcal {Y}}} respectively, as:: 16 where p X , Y ( x , y ) := P {\displaystyle p_{X,Y}(x,y):=\mathbb {P} } and p Y ( y ) = P {\displaystyle p_{Y}(y)=\mathbb {P} } . This quantity should be understood as the remaining randomness in the random variable X {\displaystyle X} given the random variable Y {\displaystyle Y} . Document 2::: Shannon's entropy In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable X {\displaystyle X} , which takes values in the alphabet X {\displaystyle {\mathcal {X}}} and is distributed according to p: X → {\displaystyle p\colon {\mathcal {X}}\to }: where Σ {\displaystyle \Sigma } denotes the sum over the variable's possible values. The choice of base for log {\displaystyle \log } , the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". Document 3::: Random Variable A random variable X {\displaystyle X} is a measurable function X: Ω → E {\displaystyle X\colon \Omega \to E} from a sample space Ω {\displaystyle \Omega } as a set of possible outcomes to a measurable space E {\displaystyle E} . The technical axiomatic definition requires the sample space Ω {\displaystyle \Omega } to be a sample space of a probability triple ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )} (see the measure-theoretic definition). A random variable is often denoted by capital Roman letters such as X , Y , Z , T {\displaystyle X,Y,Z,T} .The probability that X {\displaystyle X} takes on a value in a measurable set S ⊆ E {\displaystyle S\subseteq E} is written as P ⁡ ( X ∈ S ) = P ⁡ ( { ω ∈ Ω ∣ X ( ω ) ∈ S } ) {\displaystyle \operatorname {P} (X\in S)=\operatorname {P} (\{\omega \in \Omega \mid X(\omega )\in S\})} Document 4::: Joint entropy , x n ) {\displaystyle P(x_{1},...,x_{n})} is the probability of these values occurring together, and P ( x 1 , . . . Document 5::: Joint entropy , X n {\displaystyle X_{1},...,X_{n}} , respectively, P ( x 1 , . . .
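For a numeric check, one can enumerate every way the first player's four tiles may have been removed and average over the uniform pick among the remaining four; the chosen letter turns out to be distributed exactly like a uniformly drawn tile (L with probability 1/4, every other letter with probability 1/8). A small Python sketch along these lines (names are illustrative, not part of the record):

```python
from itertools import combinations
from math import log2
from collections import Counter

tiles = list("LETSPLAY")                       # 8 tiles, the letter L appears twice
dist = Counter()
removals = list(combinations(range(8), 4))     # all ways the 4 hidden tiles can be removed
for removed in removals:
    for i in range(8):
        if i not in removed:                   # each remaining tile is picked w.p. 1/4
            dist[tiles[i]] += 1 / (len(removals) * 4)

entropy = -sum(p * log2(p) for p in dist.values())
print(dict(dist))     # L ≈ 0.25, every other letter ≈ 0.125
print(entropy)        # ≈ 2.75 bits, i.e. 11/4
```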
epfl-collab
Consider the group $((\mathbb{Z} / 23 \mathbb{Z})^*, \cdot)$. Find how many elements of the group are generators of the group. (Hint: $5$ is a generator of the group.)
['$10$', '$2$', '$22$', '$11$']
A
null
Document 1::: Growth rate (group theory) Suppose G is a finitely generated group; and T is a finite symmetric set of generators (symmetric means that if x ∈ T {\displaystyle x\in T} then x − 1 ∈ T {\displaystyle x^{-1}\in T} ). Any element x ∈ G {\displaystyle x\in G} can be expressed as a word in the T-alphabet x = a 1 ⋅ a 2 ⋯ a k where a i ∈ T . {\displaystyle x=a_{1}\cdot a_{2}\cdots a_{k}{\text{ where }}a_{i}\in T.} Consider the subset of all elements of G that can be expressed by such a word of length ≤ n B n ( G , T ) = { x ∈ G ∣ x = a 1 ⋅ a 2 ⋯ a k where a i ∈ T and k ≤ n } . Document 2::: Affine symmetric group One way of defining groups is by generators and relations. In this type of definition, generators are a subset of group elements that, when combined, produce all other elements. The relations of the definition are a system of equations that determine when two combinations of generators are equal. In this way, the affine symmetric group S ~ n {\displaystyle {\widetilde {S}}_{n}} is generated by a set of n elements that satisfy the following relations: when n ≥ 3 {\displaystyle n\geq 3} , s i 2 = 1 {\displaystyle s_{i}^{2}=1} (the generators are involutions), s i s j = s j s i {\displaystyle s_{i}s_{j}=s_{j}s_{i}} if j is not one of i − 1 , i , i + 1 {\displaystyle i-1,i,i+1} , indicating that for these pairs of generators, the group operation is commutative, and s i s i + 1 s i = s i + 1 s i s i + 1 {\displaystyle s_{i}s_{i+1}s_{i}=s_{i+1}s_{i}s_{i+1}} .In the relations above, indices are taken modulo n, so that the third relation includes as a particular case s 0 s n − 1 s 0 = s n − 1 s 0 s n − 1 {\displaystyle s_{0}s_{n-1}s_{0}=s_{n-1}s_{0}s_{n-1}} . Document 3::: Finite cyclic group For any element g in any group G, one can form the subgroup that consists of all its integer powers: ⟨g⟩ = { gk | k ∈ Z }, called the cyclic subgroup generated by g. The order of g is |⟨g⟩|, the number of elements in ⟨g⟩, conventionally abbreviated as |g|, as ord(g), or as o(g). That is, the order of an element is equal to the order of the cyclic subgroup that it generates, A cyclic group is a group which is equal to one of its cyclic subgroups: G = ⟨g⟩ for some element g, called a generator of G. For a finite cyclic group G of order n we have G = {e, g, g2, ... , gn−1}, where e is the identity element and gi = gj whenever i ≡ j (mod n); in particular gn = g0 = e, and g−1 = gn−1. An abstract group defined by this multiplication is often denoted Cn, and we say that G is isomorphic to the standard cyclic group Cn. Such a group is also isomorphic to Z/nZ, the group of integers modulo n with the addition operation, which is the standard cyclic group in additive notation. Document 4::: Generating set of a group In abstract algebra, a generating set of a group is a subset of the group set such that every element of the group can be expressed as a combination (under the group operation) of finitely many elements of the subset and their inverses. 
In other words, if S {\displaystyle S} is a subset of a group G {\displaystyle G} , then ⟨ S ⟩ {\displaystyle \langle S\rangle } , the subgroup generated by S {\displaystyle S} , is the smallest subgroup of G {\displaystyle G} containing every element of S {\displaystyle S} , which is equal to the intersection over all subgroups containing the elements of S {\displaystyle S} ; equivalently, ⟨ S ⟩ {\displaystyle \langle S\rangle } is the subgroup of all elements of G {\displaystyle G} that can be expressed as the finite product of elements in S {\displaystyle S} and their inverses. (Note that inverses are only needed if the group is infinite; in a finite group, the inverse of an element can be expressed as a power of that element.) If G = ⟨ S ⟩ {\displaystyle G=\langle S\rangle } , then we say that S {\displaystyle S} generates G {\displaystyle G} , and the elements in S {\displaystyle S} are called generators or group generators. Document 5::: Generating set of a group Equivalent to saying an element x {\displaystyle x} generates a group is saying that ⟨ x ⟩ {\displaystyle \langle x\rangle } equals the entire group G {\displaystyle G} . For finite groups, it is also equivalent to saying that x {\displaystyle x} has order | G | {\displaystyle |G|} . A group may need an infinite number of generators.
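Since the group is cyclic of order 22, its generators are exactly the elements of multiplicative order 22, and there are $\varphi(22) = 10$ of them. A brute-force confirmation in Python (purely illustrative, all names are mine):

```python
p = 23

def order(g):
    # Multiplicative order of g modulo p.
    x, k = g, 1
    while x != 1:
        x = (x * g) % p
        k += 1
    return k

generators = [g for g in range(1, p) if order(g) == p - 1]
print(len(generators))   # 10
print(generators)        # includes 5, consistent with the hint
```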
epfl-collab
In RSA, we set $p = 7, q = 11, e = 13$. The public key is $(m, e) = (77, 13)$. The ciphertext we receive is $c = 14$. What is the message that was sent? (Hint: you may solve this faster using the Chinese remainder theorem.)
['$t=42$', '$t=7$', '$t=63$', '$t=14$']
A
null
Document 1::: RSA Cryptosystem An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decoded by someone who knows the prime numbers.The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". Document 2::: RSA Cryptosystem Breaking RSA encryption is known as the RSA problem. Whether it is as difficult as the factoring problem is an open question. There are no published methods to defeat the system if a large enough key is used. Document 3::: RSA Cryptosystem RSA (Rivest–Shamir–Adleman) is a public-key cryptosystem, one of the oldest, that is widely used for secure data transmission. The acronym "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at Government Communications Headquarters (GCHQ) (the British signals intelligence agency) by the English mathematician Clifford Cocks. That system was declassified in 1997.In a public-key cryptosystem, the encryption key is public and distinct from the decryption key, which is kept secret (private). Document 4::: RSA Factoring Challenge The RSA Factoring Challenge was a challenge put forward by RSA Laboratories on March 18, 1991 to encourage research into computational number theory and the practical difficulty of factoring large integers and cracking RSA keys used in cryptography. They published a list of semiprimes (numbers with exactly two prime factors) known as the RSA numbers, with a cash prize for the successful factorization of some of them. The smallest of them, a 100-decimal digit number called RSA-100 was factored by April 1, 1991. Document 5::: Factoring integers The researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any computer increases drastically. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure.
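A hedged worked computation (standard library only; `pow(e, -1, phi)` needs Python 3.8+, and the variable names are mine): the private exponent is $d = e^{-1} \bmod \varphi(77) = 37$, and decrypting $c = 14$ gives $42$, which can also be reached through the CRT shortcut suggested by the hint.

```python
p, q, e, c = 7, 11, 13, 14
n, phi = p * q, (p - 1) * (q - 1)      # 77, 60
d = pow(e, -1, phi)                    # 37 (modular inverse, Python 3.8+)
print(pow(c, d, n))                    # 42, direct decryption

# CRT route: work modulo 7 and 11 separately, reducing the exponent via Fermat.
t_p = pow(c % p, d % (p - 1), p)       # 14 ≡ 0 (mod 7)  -> 0
t_q = pow(c % q, d % (q - 1), q)       # 3^7 (mod 11)    -> 9
print(t_p, t_q)                        # t ≡ 0 (mod 7), t ≡ 9 (mod 11)  =>  t = 42
```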
epfl-collab
Consider an RSA encryption where the public key is published as $(m, e) = (35, 11)$. Which one of the following numbers is a valid decoding exponent?
['$5$', '$11$', '$7$', '$17$']
B
null
Document 1::: RSA Cryptosystem An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decoded by someone who knows the prime numbers.The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". Document 2::: Modular exponentiation Modular exponentiation is exponentiation performed over a modulus. It is useful in computer science, especially in the field of public-key cryptography, where it is used in both Diffie-Hellman Key Exchange and RSA public/private keys. Modular exponentiation is the remainder when an integer b (the base) is raised to the power e (the exponent), and divided by a positive integer m (the modulus); that is, c = be mod m. From the definition of division, it follows that 0 ≤ c < m. For example, given b = 5, e = 3 and m = 13, dividing 53 = 125 by 13 leaves a remainder of c = 8. Modular exponentiation can be performed with a negative exponent e by finding the modular multiplicative inverse d of b modulo m using the extended Euclidean algorithm. Document 3::: Blinding (cryptography) Depending on the characteristics of the blinding function, this can prevent some or all leakage of useful information. Note that security depends also on the resistance of the blinding functions themselves to side-channel attacks. For example, in RSA blinding involves computing the blinding operation E(x) = (xr)e mod N, where r is a random integer between 1 and N and relatively prime to N (i.e. gcd(r, N) = 1), x is the plaintext, e is the public RSA exponent and N is the RSA modulus. As usual, the decryption function f(z) = zd mod N is applied thus giving f(E(x)) = (xr)ed mod N = xr mod N. Finally it is unblinded using the function D(z) = zr−1 mod N. Multiplying xr mod N by r−1 mod N yields x, as desired. When decrypting in this manner, an adversary who is able to measure time taken by this operation would not be able to make use of this information (by applying timing attacks RSA is known to be vulnerable to) as she does not know the constant r and hence has no knowledge of the real input fed to the RSA primitives. Document 4::: Modular exponentiation That is: c = be mod m = d−e mod m, where e < 0 and b ⋅ d ≡ 1 (mod m).Modular exponentiation is efficient to compute, even for very large integers. On the other hand, computing the modular discrete logarithm – that is, finding the exponent e when given b, c, and m – is believed to be difficult. This one-way function behavior makes modular exponentiation a candidate for use in cryptographic algorithms. Document 5::: RSA Cryptosystem Breaking RSA encryption is known as the RSA problem. Whether it is as difficult as the factoring problem is an open question. There are no published methods to defeat the system if a large enough key is used.
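Since $35 = 5 \cdot 7$, a valid decoding exponent $d$ only needs to satisfy $e d \equiv 1 \pmod{\varphi(35)}$ with $\varphi(35) = 24$ (or, equivalently, modulo $\lambda(35) = 12$). A one-line check over the four candidates (Python, illustrative):

```python
e, phi = 11, (5 - 1) * (7 - 1)               # phi(35) = 24
for d in (5, 11, 7, 17):
    print(d, (e * d) % phi == 1)             # only d = 11 works: 11 * 11 = 121 ≡ 1 (mod 24)
```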
epfl-collab
Let $\mathcal{C}$ be a binary $(n,k)$ linear code with minimum distance $d_{\min} = 4$. Let $\mathcal{C}'$ be the code obtained by adding a parity-check bit $x_{n+1}=x_1 \oplus x_2 \oplus \cdots \oplus x_n$ at the end of each codeword of $\mathcal{C}$. Let $d_{\min}'$ be the minimum distance of $\mathcal{C}'$. Which of the following is true?
["$d_{\\min}' = 5$", "$d_{\\min}'$ can take different values depending on the code $\\mathcal{C}$.", "$d_{\\min}' = 6$", "$d_{\\min}' = 4$"]
D
null
Document 1::: Linear code The size of a code is the number of codewords and equals qk. The weight of a codeword is the number of its elements that are nonzero and the distance between two codewords is the Hamming distance between them, that is, the number of elements in which they differ. The distance d of the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. Document 2::: Hamming codes Hence the rate of Hamming codes is R = k / n = 1 − r / (2r − 1), which is the highest possible for codes with minimum distance of three (i.e., the minimal number of bit changes needed to go from any code word to any other code word is three) and block length 2r − 1. The parity-check matrix of a Hamming code is constructed by listing all columns of length r that are non-zero, which means that the dual code of the Hamming code is the shortened Hadamard code, also known as a Simplex code. The parity-check matrix has the property that any two columns are pairwise linearly independent. Document 3::: Extended binary Golay code In mathematics and electronics engineering, a binary Golay code is a type of linear error-correcting code used in digital communications. The binary Golay code, along with the ternary Golay code, has a particularly deep and interesting connection to the theory of finite sporadic groups in mathematics. These codes are named in honor of Marcel J. E. Golay whose 1949 paper introducing them has been called, by E. R. Berlekamp, the "best single published page" in coding theory.There are two closely related binary Golay codes. The extended binary Golay code, G24 (sometimes just called the "Golay code" in finite group theory) encodes 12 bits of data in a 24-bit word in such a way that any 3-bit errors can be corrected or any 7-bit errors can be detected. The other, the perfect binary Golay code, G23, has codewords of length 23 and is obtained from the extended binary Golay code by deleting one coordinate position (conversely, the extended binary Golay code is obtained from the perfect binary Golay code by adding a parity bit). In standard coding notation, the codes have parameters and , corresponding to the length of the codewords, the dimension of the code, and the minimum Hamming distance between two codewords, respectively. Document 4::: Linear code A linear code of length n, dimension k, and distance d is called an code (or, more precisely, q {\displaystyle _{q}} code). We want to give F q n {\displaystyle \mathbb {F} _{q}^{n}} the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (a binary symmetric channel). If some other basis is used then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to. Document 5::: Permutation codes . , n }: σ ( i ) ≠ τ ( i ) } | {\displaystyle d(\tau ,\sigma )=|\left\{i\in \{1,2,...,n\}:\sigma (i)\neq \tau (i)\right\}|} The minimum distance of a permutation code C {\displaystyle C} is defined to be the minimum positive integer d m i n {\displaystyle d_{min}} such that there exist σ , τ {\displaystyle \sigma ,\tau } ∈ {\displaystyle \in } C {\displaystyle C} , distinct, such that d ( σ , τ ) = d m i n {\displaystyle d(\sigma ,\tau )=d_{min}} . 
One of the reasons why permutation codes are suitable for certain channels is that the alphabet symbols only appear once in each codeword, which for example makes the errors occurring in the context of powerline communication less impactful on codewords
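Appending an overall parity bit can only raise the minimum distance when $d_{\min}$ is odd; with $d_{\min} = 4$, a minimum-weight codeword already has even weight, so its parity bit is 0 and the distance stays at 4. The sketch below uses the $[8,4,4]$ extended Hamming code merely as a convenient stand-in with $d_{\min} = 4$ (it is not the code from the question):

```python
from itertools import product

G = [(1,0,0,0,1,1,1,0),
     (0,1,0,0,1,1,0,1),
     (0,0,1,0,1,0,1,1),
     (0,0,0,1,0,1,1,1)]        # generator of the [8,4,4] extended Hamming code

def span(G):
    # All 2^k codewords generated by the rows of G over GF(2).
    k, n = len(G), len(G[0])
    return [tuple(sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for m in product((0, 1), repeat=k)]

def dmin(code):                 # linear code: d_min = minimum nonzero weight
    return min(sum(c) for c in code if any(c))

code = span(G)
extended = [cw + (sum(cw) % 2,) for cw in code]   # append the overall parity bit
print(dmin(code), dmin(extended))                 # 4 4
```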
epfl-collab
Let $\mathcal{C}$ be a $(n,k)$ Reed-Solomon code on $\mathbb{F}_q$. Let $\mathcal{C}'$ be the $(2n,k)$ code such that each codeword of $\mathcal{C}'$ is a codeword of $\mathcal{C}$ repeated twice, i.e., if $(x_1,\dots,x_n) \in\mathcal{C}$, then $(x_1,\dots,x_n,x_1,\dots,x_n)\in\mathcal{C'}$. What is the minimum distance of $\mathcal{C}'$?
['$2n-k+1$', '$2n-2k+2$', '$2n-k+2$', '$2n-2k+1$']
B
null
Document 1::: Permutation codes . , n }: σ ( i ) ≠ τ ( i ) } | {\displaystyle d(\tau ,\sigma )=|\left\{i\in \{1,2,...,n\}:\sigma (i)\neq \tau (i)\right\}|} The minimum distance of a permutation code C {\displaystyle C} is defined to be the minimum positive integer d m i n {\displaystyle d_{min}} such that there exist σ , τ {\displaystyle \sigma ,\tau } ∈ {\displaystyle \in } C {\displaystyle C} , distinct, such that d ( σ , τ ) = d m i n {\displaystyle d(\sigma ,\tau )=d_{min}} . One of the reasons why permutation codes are suitable for certain channels is that the alphabet symbols only appear once in each codeword, which for example makes the errors occurring in the context of powerline communication less impactful on codewords Document 2::: Folded Reed–Solomon code Something to be observed here is that the folding operation demonstrated does not change the rate R {\displaystyle R} of the original Reed–Solomon code. To prove this, consider a linear q {\displaystyle _{q}} code, of length n {\displaystyle n} , dimension k {\displaystyle k} and distance d {\displaystyle d} . The m {\displaystyle m} folding operation will make it a q m {\displaystyle \left_{q^{m}}} code. By this, the rate R = k n {\displaystyle R={\tfrac {k}{n}}} will be the same. Document 3::: Reed–Solomon code Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which include consumer technologies such as MiniDiscs, CDs, DVDs, Blu-ray discs, QR codes, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6. Reed–Solomon codes operate on a block of data treated as a set of finite-field elements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. Document 4::: Folded Reed–Solomon code In coding theory, folded Reed–Solomon codes are like Reed–Solomon codes, which are obtained by mapping m {\displaystyle m} Reed–Solomon codewords over a larger alphabet by careful bundling of codeword symbols. Folded Reed–Solomon codes are also a special case of Parvaresh–Vardy codes. Using optimal parameters one can decode with a rate of R, and achieve a decoding radius of 1 − R. The term "folded Reed–Solomon codes" was coined in a paper by V.Y. Krachkovsky with an algorithm that presented Reed–Solomon codes with many random "phased burst" errors . The list-decoding algorithm for folded RS codes corrects beyond the 1 − R {\displaystyle 1-{\sqrt {R}}} bound for Reed–Solomon codes achieved by the Guruswami–Sudan algorithm for such phased burst errors. Document 5::: Folded Reed–Solomon code The above definition is made more clear by means of the diagram with m = 3 {\displaystyle m=3} , where m {\displaystyle m} is the folding parameter. The message is denoted by f ( X ) {\displaystyle f(X)} , which when encoded using Reed–Solomon encoding, consists of values of f {\displaystyle f} at x 0 , x 1 , x 2 , … , x n − 1 {\displaystyle x_{0},x_{1},x_{2},\ldots ,x_{n-1}} , where x i = γ i {\displaystyle x_{i}=\gamma ^{i}} . Then bundling is performed in groups of 3 elements, to give a codeword of length n / 3 {\displaystyle n/3} over the alphabet F q 3 {\displaystyle \mathbb {F} _{q}^{3}} .
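A repeated codeword has exactly twice the weight of the original, so the minimum distance doubles: $2(n-k+1) = 2n-2k+2$. A toy numeric check with a small Reed-Solomon code over GF(5), with $n = 4$ and $k = 2$ (entirely illustrative, not the code from the question):

```python
q, k = 5, 2
points = [1, 2, 3, 4]                               # n = 4 distinct evaluation points in GF(5)

def encode(coeffs):                                 # evaluate the degree-<k polynomial at the points
    return tuple(sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q for x in points)

code = [encode((a0, a1)) for a0 in range(q) for a1 in range(q)]
repeated = [cw + cw for cw in code]                 # each codeword repeated twice

def dmin(C):
    return min(sum(a != b for a, b in zip(x, y))
               for i, x in enumerate(C) for y in C[i + 1:])

print(dmin(code), dmin(repeated))                   # 3 and 6, i.e. (n-k+1) and 2n-2k+2
```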
epfl-collab
Consider the following mysterious binary encoding: \begin{center} \begin{tabular}{c|c} symbol & encoding \\ \hline $a$ & $??0$ \\ $b$ & $??0$ \\ $c$ & $??0$ \\ $d$ & $??0$ \end{tabular} \end{center} where by '$?$' we mean that we do not know which bit is assigned to each of the first two positions of the encoding of any of the source symbols $a,b,c,d$. What can you infer about this encoding, assuming that the codewords are all different?
['The encoding is uniquely-decodable but not prefix-free.', 'We do not possess enough information to say something about the code.', "It does not satisfy Kraft's Inequality.", 'The encoding is uniquely-decodable.']
D
null
Document 1::: Binary coding A binary code represents text, computer processor instructions, or any other data using a two-symbol system. The two-symbol system used is often "0" and "1" from the binary number system. The binary code assigns a pattern of binary digits, also known as bits, to each character, instruction, etc. For example, a binary string of eight bits (which is also called a byte) can represent any of 256 possible values and can, therefore, represent a wide variety of different items. In computing and telecommunications, binary codes are used for various methods of encoding data, such as character strings, into bit strings. Document 2::: Binary coding Those methods may use fixed-width or variable-width strings. In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length; that bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal or hexadecimal notation. There are many character sets and many character encodings for them. A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lower case a, if represented by the bit string 01100001 (as it is in the standard ASCII code), can also be represented as the decimal number "97". Document 3::: Inductive probability Sentences may be written down in this language as strings of characters. But in the computer it is possible to encode these sentences as strings of bits (1s and 0s). Then the language may be encoded so that the most commonly used sentences are the shortest. Document 4::: Extended binary Golay code In mathematics and electronics engineering, a binary Golay code is a type of linear error-correcting code used in digital communications. The binary Golay code, along with the ternary Golay code, has a particularly deep and interesting connection to the theory of finite sporadic groups in mathematics. These codes are named in honor of Marcel J. E. Golay whose 1949 paper introducing them has been called, by E. R. Berlekamp, the "best single published page" in coding theory.There are two closely related binary Golay codes. The extended binary Golay code, G24 (sometimes just called the "Golay code" in finite group theory) encodes 12 bits of data in a 24-bit word in such a way that any 3-bit errors can be corrected or any 7-bit errors can be detected. The other, the perfect binary Golay code, G23, has codewords of length 23 and is obtained from the extended binary Golay code by deleting one coordinate position (conversely, the extended binary Golay code is obtained from the perfect binary Golay code by adding a parity bit). In standard coding notation, the codes have parameters and , corresponding to the length of the codewords, the dimension of the code, and the minimum Hamming distance between two codewords, respectively. Document 5::: List of binary codes This is a list of some binary codes that are (or have been) used to represent text as a sequence of binary digits "0" and "1". Fixed-width binary codes use a set number of bits to represent each character in the text, while in variable-width binary codes, the number of bits may vary from character to character. the binary codes are used to read the computer language.
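Whatever the unknown bits turn out to be, all four codewords have fixed length 3, so as long as they are distinct the code is prefix-free and therefore uniquely decodable. A brute-force confirmation over every assignment of distinct 2-bit prefixes (Python, illustrative):

```python
from itertools import permutations

for prefixes in permutations(('00', '01', '10', '11'), 4):
    words = [p + '0' for p in prefixes]                      # a=??0, b=??0, c=??0, d=??0
    kraft = sum(2 ** -len(w) for w in words)                 # 4 * 2^-3 = 0.5 <= 1
    prefix_free = not any(u != w and w.startswith(u) for u in words for w in words)
    assert kraft <= 1 and prefix_free
print("every assignment of distinct prefixes is prefix-free, hence uniquely decodable")
```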
epfl-collab
Suppose that you possess a $D$-ary encoding $\Gamma$ for the source $S$ that does not satisfy Kraft's Inequality. Specifically, in this problem, we assume that our encoding satisfies $\sum_{i=1}^n D^{-l_i} = k+1 $ with $k>0$. What can you infer about the average codeword length $L(S,\Gamma)$?
["The code would not be uniquely-decodable and thus we can't infer anything on its expected length.", '$L(S,\\Gamma) \\geq H_D(S)-\\log_D(e^k)$.', '$L(S,\\Gamma) \\geq k H_D(S)$.', '$L(S,\\Gamma) \\geq \x0crac{H_D(S)}{k}$.']
B
null
Document 1::: Kraft–McMillan theorem In coding theory, the Kraft–McMillan inequality gives a necessary and sufficient condition for the existence of a prefix code (in Leon G. Kraft's version) or a uniquely decodable code (in Brockway McMillan's version) for a given set of codeword lengths. Its applications to prefix codes and trees often find use in computer science and information theory. Kraft's inequality was published in Kraft (1949). However, Kraft's paper discusses only prefix codes, and attributes the analysis leading to the inequality to Raymond Redheffer. The result was independently discovered in McMillan (1956). McMillan proves the result for the general case of uniquely decodable codes, and attributes the version for prefix codes to a spoken observation in 1955 by Joseph Leo Doob. Document 2::: Analog encryption Length of the code word is written as l ( C ( x ) ) . {\displaystyle l(C(x)).} Expected length of a code is l ( C ) = ∑ x ∈ X l ( C ( x ) ) P . Document 3::: Long code (mathematics) Since there are only 2 k {\displaystyle 2^{k}} such functions, the block length of the Walsh-Hadamard code is 2 k {\displaystyle 2^{k}} . An equivalent definition of the long code is as follows: The Long code encoding of j ∈ {\displaystyle j\in } is defined to be the truth table of the Boolean dictatorship function on the j {\displaystyle j} th coordinate, i.e., the truth table of f: { 0 , 1 } n → { 0 , 1 } {\displaystyle f:\{0,1\}^{n}\to \{0,1\}} with f ( x 1 , … , x n ) = x j {\displaystyle f(x_{1},\dots ,x_{n})=x_{j}} . Thus, the Long code encodes a ( log ⁡ n ) {\displaystyle (\log n)} -bit string as a 2 n {\displaystyle 2^{n}} -bit string. Document 4::: Length-limited Huffman code In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code is Huffman coding, an algorithm developed by David A. Huffman while he was a Sc.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".The output from Huffman's algorithm can be viewed as a variable-length code table for encoding a source symbol (such as a character in a file). The algorithm derives this table from the estimated probability or frequency of occurrence (weight) for each possible value of the source symbol. Document 5::: Length-limited Huffman code As in other entropy encoding methods, more common symbols are generally represented using fewer bits than less common symbols. Huffman's method can be efficiently implemented, finding a code in time linear to the number of input weights if these weights are sorted. However, although optimal among methods encoding symbols separately, Huffman coding is not always optimal among all compression methods - it is replaced with arithmetic coding or asymmetric numeral systems if a better compression ratio is required.
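One way to arrive at the bound \(L(S,\Gamma) \geq H_D(S) - \log_D(e^k)\) (a derivation sketch, not taken from the documents above): even when the code is not uniquely decodable, Jensen's inequality applied to the concave function \(\log_D\) gives
\[ H_D(S) - L(S,\Gamma) = \sum_{i=1}^n p_i \log_D \frac{D^{-l_i}}{p_i} \leq \log_D\Big(\sum_{i=1}^n D^{-l_i}\Big) = \log_D(k+1) \leq \log_D(e^k), \]
where the last step uses \(1 + k \leq e^k\) for \(k > 0\); rearranging yields the stated lower bound on \(L(S,\Gamma)\).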
epfl-collab
A colleague challenges you to create an $(n-1,k,d_{min})$ code $\mathcal C'$ from an $(n,k,d_{min})$ code $\mathcal C$ as follows: given a generator matrix $G$ that generates $\mathcal C$, drop one column from $G$. Then, generate the new code with this truncated $k \times (n-1)$ generator matrix. The catch is that your colleague only gives you a set $\mathcal S=\{\vec s_1,\vec s_2, \vec s_3\}$ of $3$ columns of $G$ that you are allowed to drop, where $\vec s_1$ is the all-zeros vector, $\vec s_2$ is the all-ones vector, and $\vec s_3$ is a canonical basis vector. From the length of the columns $\vec s_i$ you can infer $k$. You do not know $n$, nor do you know anything about the $n-3$ columns of $G$ that are not in $\mathcal S$. However, your colleague tells you that $G$ is in systematic form, i.e., $G=[I ~~ P]$ for some unknown $P$, and that all of the elements in $\mathcal S$ are columns of $P$. Which of the following options in $\mathcal S$ would you choose as the column of $G$ to drop?
['$\\vec s_2$ (the all-ones vector)', '$\\vec s_1$ (the all-zeros vector).', '$\\vec s_3$ (one of the canonical basis vectors).', 'It is impossible to guarantee that dropping a column from $\\mathcal S$ will not decrease the minimum distance.']
B
null
Document 1::: Generator matrix In coding theory, a generator matrix is a matrix whose rows form a basis for a linear code. The codewords are all of the linear combinations of the rows of this matrix, that is, the linear code is the row space of its generator matrix. Document 2::: Minimum polynomial extrapolation . , x k {\displaystyle x_{1},x_{2},...,x_{k}} in R n {\displaystyle \mathbb {R} ^{n}} , one constructs the n × ( k − 1 ) {\displaystyle n\times (k-1)} matrix U = ( x 2 − x 1 , x 3 − x 2 , . . Document 3::: DNA code construction Here, the elements of E {\displaystyle {\mathit {E}}} lie in the Galois field GF ( p ) {\displaystyle {\text{GF}}(p)} . By definition, a generalized Hadamard matrix H {\displaystyle {\mathit {H}}} in its standard form has only 1s in its first row and column. The ( n − 1 ) × ( n − 1 ) {\displaystyle ({\mathit {n}}-1)\times ({\mathit {n}}-1)} square matrix formed by the remaining entries of H {\displaystyle H} is called the core of H {\displaystyle {\mathit {H}}} , and the corresponding submatrix of the exponent matrix E {\displaystyle {\mathit {E}}} is called the core of construction. Thus, by omission of the all-zero first column cyclic generalized Hadamard codes are possible, whose codewords are the row vectors of the punctured matrix. Also, the rows of such an exponent matrix satisfy the following two properties: (i) in each of the nonzero rows of the exponent matrix, each element of Z p {\displaystyle \mathbb {Z} _{p}} appears a constant number, n / p {\displaystyle {\mathit {n}}/{\mathit {p}}} , of times; and (ii) the Hamming distance between any two rows is n ( p − 1 ) / p {\displaystyle {\mathit {n}}({\mathit {p}}-1)/{\mathit {p}}} . Document 4::: Minimum polynomial extrapolation The number 1 is then appended to the end of c {\displaystyle c} , and the extrapolated limit is s = X c ∑ i = 1 k c i , {\displaystyle s={Xc \over \sum _{i=1}^{k}c_{i}},} where X = ( x 2 , x 3 , . . . , x k + 1 ) {\displaystyle X=(x_{2},x_{3},...,x_{k+1})} is the matrix whose columns are the k {\displaystyle k} iterates starting at 2. The following 4 line MATLAB code segment implements the MPE algorithm: == References == Document 5::: Nest algebra Let us work in the n {\displaystyle n} -dimensional complex vector space C n {\displaystyle \mathbb {C} ^{n}} , and let e 1 , e 2 , … , e n {\displaystyle e_{1},e_{2},\dots ,e_{n}} be the standard basis. For j = 0 , 1 , 2 , … , n {\displaystyle j=0,1,2,\dots ,n} , let S j {\displaystyle S_{j}} be the j {\displaystyle j} -dimensional subspace of C n {\displaystyle \mathbb {C} ^{n}} spanned by the first j {\displaystyle j} basis vectors e 1 , … , e j {\displaystyle e_{1},\dots ,e_{j}} . Let N = { ( 0 ) = S 0 , S 1 , S 2 , … , S n − 1 , S n = C n } ; {\displaystyle N=\{(0)=S_{0},S_{1},S_{2},\dots ,S_{n-1},S_{n}=\mathbb {C} ^{n}\};} then N is a subspace nest, and the corresponding nest algebra of n × n complex matrices M leaving each subspace in N invariant that is, satisfying M S ⊆ S {\displaystyle MS\subseteq S} for each S in N – is precisely the set of upper-triangular matrices. If we omit one or more of the subspaces Sj from N then the corresponding nest algebra consists of block upper-triangular matrices.
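The all-zeros column contributes 0 to every codeword in the corresponding position, so removing it changes no pairwise distance, whereas the other two candidates may carry weight that a minimum-weight codeword needs. A toy illustration with a made-up $2 \times 5$ systematic generator matrix containing a zero column (Python; the matrix is purely illustrative):

```python
from itertools import product

G = [(1, 0, 0, 1, 1),
     (0, 1, 0, 1, 0)]                  # third column is all zeros

def span(G):
    # All 2^k codewords generated by the rows of G over GF(2).
    k, n = len(G), len(G[0])
    return [tuple(sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for m in product((0, 1), repeat=k)]

def dmin(code):
    return min(sum(c) for c in code if any(c))

punctured = [row[:2] + row[3:] for row in G]     # drop the all-zero column
print(dmin(span(G)), dmin(span(punctured)))      # 2 2, unchanged
```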
epfl-collab
A binary prefix-free code $\Gamma$ is made of four codewords. The first three codewords have codeword lengths $\ell_1 = 2$, $\ell_2 = 3$ and $\ell_3 = 3$. What is the minimum possible length for the fourth codeword?
['$3$.', '$4$.', '$2$.', '$1$.']
D
null
Document 1::: Length-limited Huffman code In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code is Huffman coding, an algorithm developed by David A. Huffman while he was a Sc.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".The output from Huffman's algorithm can be viewed as a variable-length code table for encoding a source symbol (such as a character in a file). The algorithm derives this table from the estimated probability or frequency of occurrence (weight) for each possible value of the source symbol. Document 2::: Hamming codes Hence the rate of Hamming codes is R = k / n = 1 − r / (2r − 1), which is the highest possible for codes with minimum distance of three (i.e., the minimal number of bit changes needed to go from any code word to any other code word is three) and block length 2r − 1. The parity-check matrix of a Hamming code is constructed by listing all columns of length r that are non-zero, which means that the dual code of the Hamming code is the shortened Hadamard code, also known as a Simplex code. The parity-check matrix has the property that any two columns are pairwise linearly independent. Document 3::: Linear block codes A linear code of length n transmits blocks containing n symbols. For example, the Hamming code is a linear binary code which represents 4-bit messages using 7-bit codewords. Document 4::: Linear code The size of a code is the number of codewords and equals qk. The weight of a codeword is the number of its elements that are nonzero and the distance between two codewords is the Hamming distance between them, that is, the number of elements in which they differ. The distance d of the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. Document 5::: Extended binary Golay code In mathematics and electronics engineering, a binary Golay code is a type of linear error-correcting code used in digital communications. The binary Golay code, along with the ternary Golay code, has a particularly deep and interesting connection to the theory of finite sporadic groups in mathematics. These codes are named in honor of Marcel J. E. Golay whose 1949 paper introducing them has been called, by E. R. Berlekamp, the "best single published page" in coding theory.There are two closely related binary Golay codes. The extended binary Golay code, G24 (sometimes just called the "Golay code" in finite group theory) encodes 12 bits of data in a 24-bit word in such a way that any 3-bit errors can be corrected or any 7-bit errors can be detected. The other, the perfect binary Golay code, G23, has codewords of length 23 and is obtained from the extended binary Golay code by deleting one coordinate position (conversely, the extended binary Golay code is obtained from the perfect binary Golay code by adding a parity bit). In standard coding notation, the codes have parameters and , corresponding to the length of the codewords, the dimension of the code, and the minimum Hamming distance between two codewords, respectively.
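Kraft's inequality leaves exactly $1 - (2^{-2} + 2^{-3} + 2^{-3}) = 1/2$ of the budget, so a length-1 fourth codeword still fits. A concrete check with one possible (made-up) assignment of codewords:

```python
lengths = [2, 3, 3]
print(1 - sum(2 ** -l for l in lengths))          # 0.5 of the Kraft budget remains free

words = ['10', '110', '111', '0']                 # lengths 2, 3, 3 and 1 (hypothetical assignment)
prefix_free = not any(u != w and w.startswith(u) for u in words for w in words)
print(prefix_free)                                # True: a length-1 fourth codeword works
```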
epfl-collab
Determine which of the following compound propositions are satisfiable (more than one answer can be correct):
['(p → q)∧(p → ¬q)∧(¬p → q)', '(p∨¬q)∧(¬p∨q)∧(¬p∨¬q)', 'None of the other options', '(p↔q)∧(¬p↔q)']
B
null
Document 1::: Satisfiability problem Satisfiability and validity are defined for a single formula, but can be generalized to an arbitrary theory or set of formulas: a theory is satisfiable if at least one interpretation makes every formula in the theory true, and valid if every formula is true in every interpretation. For example, theories of arithmetic such as Peano arithmetic are satisfiable because they are true in the natural numbers. Document 2::: Propositional satisfiability For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. SAT is the first problem that was proven to be NP-complete; see Cook–Levin theorem. Document 3::: Propositional satisfiability For example, x1 is a positive literal, ¬x2 is a negative literal, x1 ∨ ¬x2 is a clause. The formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). Document 4::: Propositional satisfiability In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. Document 5::: Propositional satisfiability A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory, algorithmics, cryptography and artificial intelligence.
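With only two variables, a brute-force truth table settles each candidate formula. A small enumerator (Python; the names f1, f2, f3 are mine and simply mirror the listed formulas):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product((False, True), repeat=2):
    f1 = implies(p, q) and implies(p, not q) and implies(not p, q)       # (p→q)∧(p→¬q)∧(¬p→q)
    f2 = (p or not q) and (not p or q) and (not p or not q)              # (p∨¬q)∧(¬p∨q)∧(¬p∨¬q)
    f3 = (p == q) and ((not p) == q)                                     # (p↔q)∧(¬p↔q)
    print(p, q, f1, f2, f3)
```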
epfl-collab
Let P be the statement ∀x(x>-3 -> x>3). Determine for which domain P evaluates to true:
['x>3', '-3<x<3', 'None of the other options', 'x>-3']
A
null
Document 1::: Planar SAT In computer science, the planar 3-satisfiability problem (abbreviated PLANAR 3SAT or PL3SAT) is an extension of the classical Boolean 3-satisfiability problem to a planar incidence graph. In other words, it asks whether the variables of a given Boolean formula—whose incidence graph consisting of variables and clauses can be embedded on a plane—can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. Document 2::: Bound variable For example, consider the following expression in which both variables are bound by logical quantifiers: ∀ y ∃ x ( x = y ) . {\displaystyle \forall y\,\exists x\,\left(x={\sqrt {y}}\right).} This expression evaluates to false if the domain of x {\displaystyle x} and y {\displaystyle y} is the real numbers, but true if the domain is the complex numbers. The term "dummy variable" is also sometimes used for a bound variable (more commonly in general mathematics than in computer science), but this should not be confused with the identically named but unrelated concept of dummy variable as used in statistics, most commonly in regression analysis. Document 3::: If and only if The truth table of P ⇔ {\displaystyle \Leftrightarrow } Q is as follows: It is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate. Document 4::: Binary decision diagram The left figure below shows a binary decision tree (the reduction rules are not applied), and a truth table, each representing the function f ( x 1 , x 2 , x 3 ) {\displaystyle f(x1,x2,x3)} . In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to find f ( 0 , 1 , 1 ) {\displaystyle f(0,1,1)} , begin at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and x3 each have an assignment to one). Document 5::: Binary decision diagrams For example, assume that a Boolean function is represented with a BDD represented using complemented edges. To find the value of the Boolean function for a given assignment of (Boolean) values to the variables, we start at the reference edge, which points to the BDD's root, and follow the path that is defined by the given variable values (following a low edge if the variable that labels a node equals FALSE, and following the high edge if the variable that labels a node equals TRUE), until we reach the leaf node. While following this path, we count how many complemented edges we have traversed.
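A spot-check over sample integers in each candidate domain (illustrative only; a finite sample can refute but not prove, although here the pattern is clear: for the domain $x > 3$ the implication holds for every element, while the other domains contain counterexamples such as $x = 0$):

```python
def P_holds_on(domain):
    # Checks whether (x > -3) -> (x > 3) holds for every x in the sample.
    return all((not (x > -3)) or (x > 3) for x in domain)

print(P_holds_on(range(4, 100)))      # domain x > 3       -> True
print(P_holds_on(range(-2, 3)))       # domain -3 < x < 3  -> False (x = 0 breaks it)
print(P_holds_on(range(-2, 100)))     # domain x > -3      -> False
```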
epfl-collab
Let p(x,y) be the statement “x visits y”, where the domain of x consists of all the humans in the world and the domain of y consists of all the places in the world. Use quantifiers to express the following statement: There is a place in the world that has never been visited by humans.
['∀y ∀x ¬p(x,y)', '∀y ∃x ¬p(x,y)', '¬(∀y ∃x ¬p(x,y))', '∃y ∀x ¬p(x,y)']
D
null
Document 1::: Logical quantifier In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier ∀ {\displaystyle \forall } in the first order formula ∀ x P ( x ) {\displaystyle \forall xP(x)} expresses that everything in the domain satisfies the property denoted by P {\displaystyle P} . On the other hand, the existential quantifier ∃ {\displaystyle \exists } in the formula ∃ x P ( x ) {\displaystyle \exists xP(x)} expresses that there exists something in the domain which satisfies that property. A formula where a quantifier takes widest scope is called a quantified formula. Document 2::: Logical quantifier They can also be used to define more complex quantifiers, as in the formula ¬ ∃ x P ( x ) {\displaystyle \neg \exists xP(x)} which expresses that nothing has the property P {\displaystyle P} . Other quantifiers are only definable within second order logic or higher order logics. Quantifiers have been generalized beginning with the work of Mostowski and Lindström. Document 3::: Logical quantifier A quantified formula must contain a bound variable and a subformula specifying a property of the referent of that variable. The most commonly used quantifiers are ∀ {\displaystyle \forall } and ∃ {\displaystyle \exists } . These quantifiers are standardly defined as duals; in classical logic, they are interdefinable using negation. Document 4::: Counting quantifiers A counting quantifier is a mathematical term for a quantifier of the form "there exists at least k elements that satisfy property X". In first-order logic with equality, counting quantifiers can be defined in terms of ordinary quantifiers, so in this context they are a notational shorthand. However, they are interesting in the context of logics such as two-variable logic with counting that restrict the number of variables in formulas. Also, generalized counting quantifiers that say "there exists infinitely many" are not expressible using a finite number of formulas in first-order logic. Document 5::: Sentence (mathematical logic) In mathematical logic, a sentence (or closed formula) of a predicate logic is a Boolean-valued well-formed formula with no free variables. A sentence can be viewed as expressing a proposition, something that must be true or false. The restriction of having no free variables is needed to make sure that sentences can have concrete, fixed truth values: as the free variables of a (general) formula can range over several values, the truth value of such a formula may vary. Sentences without any logical connectives or quantifiers in them are known as atomic sentences; by analogy to atomic formula.
epfl-collab
Which of the following arguments is correct?
['Everyone who eats vegetables every day is healthy. Linda is not healthy. Therefore, Linda does not eat vegetables every day.', 'Every physics major takes calculus. Mathilde is taking calculus. Therefore, Mathilde is a physics major.', 'All cats like milk. My pet is not a cat. Therefore, my pet does not like milk.', 'All students in this class understand math. Alice is a student in this class. Therefore, Alice doesn’t understand math.']
A
null
Document 1::: Formal logic Arguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. Document 2::: Logical argument An argument is a series of sentences, statements or propositions some of which are called premises and one is the conclusion. The purpose of an argument is to give reasons for one's conclusion via justification, explanation, and/or persuasion. Arguments are intended to determine or show the degree of truth or acceptability of another statement called a conclusion. Arguments can be studied from three main perspectives: the logical, the dialectical and the rhetorical perspective.In logic, an argument is usually expressed not in natural language but in a symbolic formal language, and it can be defined as any group of propositions of which one is claimed to follow from the others through deductively valid inferences that preserve truth from the premises to the conclusion. Document 3::: Logical reasoning Deductive reasoning offers the strongest support: the premises ensure the conclusion, meaning that it is impossible for the conclusion to be false if all the premises are true. Such an argument is called a valid argument, for example: all men are mortal; Socrates is a man; therefore, Socrates is mortal. For valid arguments, it is not important whether the premises are actually true but only that, if they were true, the conclusion could not be false. Document 4::: Logical argument This logical perspective on argument is relevant for scientific fields such as mathematics and computer science. Logic is the study of the forms of reasoning in arguments and the development of standards and criteria to evaluate arguments. Deductive arguments can be valid, and the valid ones can be sound: in a valid argument, premisses necessitate the conclusion, even if one or more of the premises is false and the conclusion is false; in a sound argument, true premises necessitate a true conclusion. Document 5::: Mathematical reasoning Deductive reasoning offers the strongest support: the premises ensure the conclusion, meaning that it is impossible for the conclusion to be false if all the premises are true. Such an argument is called a valid argument, for example: all men are mortal; Socrates is a man; therefore, Socrates is mortal. For valid arguments, it is not important whether the premises are actually true but only that, if they were true, the conclusion could not be false.
epfl-collab
You are given the following collection of premises: If I go to the museum, it either rains or snows. I went to the museum on Saturday or I went to the museum on Sunday. It did not rain and it did not snow on Saturday. It did not rain on Sunday. Which conclusions can be drawn from these premises? (more than one answer can be correct)
['I went to the museum on Saturday.', 'I went to the museum on Sunday.', 'It snowed on Sunday.', 'It was warm on Saturday.']
B
null
Document 1::: A Treatise on Probability They can't be compared. Is our expectation of rain, when we start out for a walk, always more likely than not, or less likely than not, or as likely as not? Document 2::: Formal logic Logic studies arguments, which consist of a set of premises together with a conclusion. An example is the argument from the premises "it's Sunday" and "if it's Sunday then I don't have to work" to the conclusion "I don't have to work". Premises and conclusions express propositions or claims that can be true or false. Document 3::: Corresponding conditional In logic, the corresponding conditional of an argument (or derivation) is a material conditional whose antecedent is the conjunction of the argument's (or derivation's) premises and whose consequent is the argument's conclusion. An argument is valid if and only if its corresponding conditional is a logical truth. It follows that an argument is valid if and only if the negation of its corresponding conditional is a contradiction. Therefore, the construction of a corresponding conditional provides a useful technique for determining the validity of an argument. Document 4::: Foundations of statistics For instance, a weather forecast indicating a 90% probability of rain means it will likely rain, while a 5% probability means it is unlikely to rain. The actual outcome, whether it rains or not, can only be determined after the event. Statistics is also fundamental to other disciplines of science that involve predicting or classifying events based on a large set of data. Document 5::: A Treatise on Probability I am prepared to argue that on some occasions none of these alternatives hold, and that it will be an arbitrary matter to decide for or against the umbrella. If the barometer is high, but the clouds are black, it is not always rational that one should prevail over the other in our minds, or even that we should balance them, though it will be rational to allow caprice to determine us and to waste no time on the debate. == References ==
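The premises can be checked mechanically by enumerating all truth assignments: in every assignment consistent with them, the museum visit happens on Sunday (and, as it turns out, it also snows on Sunday). A brute-force sketch (Python; the variable names are mine):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

consistent = set()
for sat, sun, rain_sat, snow_sat, rain_sun, snow_sun in product((False, True), repeat=6):
    premises = (implies(sat, rain_sat or snow_sat) and
                implies(sun, rain_sun or snow_sun) and
                (sat or sun) and
                not rain_sat and not snow_sat and
                not rain_sun)
    if premises:
        consistent.add((sat, sun, snow_sun))

print(consistent)    # {(False, True, True)}: museum on Sunday, and it snowed on Sunday
```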
epfl-collab
Suppose we have the following function \(f: [0, 2] \to [-\pi, \pi] \). \[f(x) = \begin{cases} x^2 & \text{ for } 0\leq x < 1\\ 2-(x-2)^2 & \text{ for } 1 \leq x \leq 2 \end{cases} \]
['\\(f\\) is bijective.', '\\(f\\) is surjective but not injective.', '\\(f\\) is not injective and not surjective.', '\\(f\\) is injective but not surjective.']
D
null
Document 1::: Function representation , x n ) ≥ 0 {\displaystyle f(x_{1},x_{2},...,x_{n})\geq 0} belong to the object, and the points with f ( x 1 , x 2 , . . . Document 2::: E-function A function f(x) is called of type E, or an E-function, if the power series f ( x ) = ∑ n = 0 ∞ c n x n n ! {\displaystyle f(x)=\sum _{n=0}^{\infty }c_{n}{\frac {x^{n}}{n!}}} satisfies the following three conditions: All the coefficients cn belong to the same algebraic number field, K, which has finite degree over the rational numbers; For all ε > 0, | c n | ¯ = O ( n n ε ) {\displaystyle {\overline {\left|c_{n}\right|}}=O\left(n^{n\varepsilon }\right)} ,where the left hand side represents the maximum of the absolute values of all the algebraic conjugates of cn; For all ε > 0 there is a sequence of natural numbers q0, q1, q2,... such that qnck is an algebraic integer in K for k=0, 1, 2,..., n, and n = 0, 1, 2,... and for which q n = O ( n n ε ) {\displaystyle q_{n}=O\left(n^{n\varepsilon }\right)} .The second condition implies that f is an entire function of x. Document 3::: Generating function transformation In mathematics, a transformation of a sequence's generating function provides a method of converting the generating function for one sequence into a generating function enumerating another. These transformations typically involve integral formulas applied to a sequence generating function (see integral transformations) or weighted sums over the higher-order derivatives of these functions (see derivative transformations). Given a sequence, { f n } n = 0 ∞ {\displaystyle \{f_{n}\}_{n=0}^{\infty }} , the ordinary generating function (OGF) of the sequence, denoted F ( z ) {\displaystyle F(z)} , and the exponential generating function (EGF) of the sequence, denoted F ^ ( z ) {\displaystyle {\widehat {F}}(z)} , are defined by the formal power series F ( z ) = ∑ n = 0 ∞ f n z n = f 0 + f 1 z + f 2 z 2 + ⋯ {\displaystyle F(z)=\sum _{n=0}^{\infty }f_{n}z^{n}=f_{0}+f_{1}z+f_{2}z^{2}+\cdots } F ^ ( z ) = ∑ n = 0 ∞ f n n ! z n = f 0 0 ! Document 4::: Synchrotron function In mathematics the synchrotron functions are defined as follows (for x ≥ 0): First synchrotron function F ( x ) = x ∫ x ∞ K 5 3 ( t ) d t {\displaystyle F(x)=x\int _{x}^{\infty }K_{\frac {5}{3}}(t)\,dt} Second synchrotron function G ( x ) = x K 2 3 ( x ) {\displaystyle G(x)=xK_{\frac {2}{3}}(x)} where Kj is the modified Bessel function of the second kind. Document 5::: Triangular function Note that some authors instead define the triangle function to have a base of width 1 instead of width 2: tri ⁡ ( 2 x ) = Λ ( 2 x ) = def max ( 1 − 2 | x | , 0 ) = { 1 − 2 | x | , | x | < 1 2 ; 0 otherwise . {\displaystyle {\begin{aligned}\operatorname {tri} (2x)=\Lambda (2x)\ &{\overset {\underset {\text{def}}{}}{=}}\ \max {\big (}1-2|x|,0{\big )}\\&={\begin{cases}1-2|x|,&|x|<{\tfrac {1}{2}};\\0&{\text{otherwise}}.\\\end{cases}}\end{aligned}}} In its most general form a triangular function is any linear B-spline: tri j ⁡ ( x ) = { ( x − x j − 1 ) / ( x j − x j − 1 ) , x j − 1 ≤ x < x j ; ( x j + 1 − x ) / ( x j + 1 − x j ) , x j ≤ x < x j + 1 ; 0 otherwise . {\displaystyle \operatorname {tri} _{j}(x)={\begin{cases}(x-x_{j-1})/(x_{j}-x_{j-1}),&x_{j-1}\leq x
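A quick numerical sanity check (a sketch only; a finite grid cannot prove injectivity, but it matches the analysis: $f$ is strictly increasing on $[0,2]$ with range $[0,2]$, a proper subset of $[-\pi,\pi]$, so it is injective but not surjective):

```python
def f(x):
    return x ** 2 if x < 1 else 2 - (x - 2) ** 2

xs = [i / 10000 for i in range(20001)]             # grid over [0, 2]
ys = [f(x) for x in xs]
print(all(a < b for a, b in zip(ys, ys[1:])))      # strictly increasing on the grid
print(min(ys), max(ys))                            # 0.0 2.0, far from covering [-pi, pi]
```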
epfl-collab
Which of the following functions \( f :\mathbb{Z} \times \mathbb{Z} \to \mathbb{Z} \) are surjective?
['\\( f(m,n)=m \\)', '\\( f(m,n)=m+n \\)', '\\( f(m,n)=|n| \\)', '\\( f(m,n)=m^2+n^2 \\)']
B
null
Document 1::: Identity map Formally, if M is a set, the identity function f on M is defined to be a function with M as its domain and codomain, satisfying In other words, the function value f(X) in the codomain M is always the same as the input element X in the domain M. The identity function on M is clearly an injective function as well as a surjective function, so it is bijective.The identity function f on M is often denoted by idM. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or diagonal of M. Document 2::: Surjective homomorphism Formally, a map f: A → B {\displaystyle f:A\to B} preserves an operation μ {\displaystyle \mu } of arity k {\displaystyle k} , defined on both A {\displaystyle A} and B {\displaystyle B} if f ( μ A ( a 1 , … , a k ) ) = μ B ( f ( a 1 ) , … , f ( a k ) ) , {\displaystyle f(\mu _{A}(a_{1},\ldots ,a_{k}))=\mu _{B}(f(a_{1}),\ldots ,f(a_{k})),} for all elements a 1 , . . Document 3::: Surjective homomorphism For example, the real numbers form a group for addition, and the positive real numbers form a group for multiplication. The exponential function x ↦ e x {\displaystyle x\mapsto e^{x}} satisfies e x + y = e x e y , {\displaystyle e^{x+y}=e^{x}e^{y},} and is thus a homomorphism between these two groups. It is even an isomorphism (see below), as its inverse function, the natural logarithm, satisfies ln ⁡ ( x y ) = ln ⁡ ( x ) + ln ⁡ ( y ) , {\displaystyle \ln(xy)=\ln(x)+\ln(y),} and is also a group homomorphism. Document 4::: Completely multiplicative function A completely multiplicative function (or totally multiplicative function) is an arithmetic function (that is, a function whose domain is the natural numbers), such that f(1) = 1 and f(ab) = f(a)f(b) holds for all positive integers a and b.Without the requirement that f(1) = 1, one could still have f(1) = 0, but then f(a) = 0 for all positive integers a, so this is not a very strong restriction. The definition above can be rephrased using the language of algebra: A completely multiplicative function is a homomorphism from the monoid ( Z + , ⋅ ) {\displaystyle (\mathbb {Z} ^{+},\cdot )} (that is, the positive integers under multiplication) to some other monoid. Document 5::: Surjective homomorphism . , a k {\displaystyle a_{1},...,a_{k}} in A {\displaystyle A} . The operations that must be preserved by a homomorphism include 0-ary operations, that is the constants.
epfl-collab
Let \(A = \{a, b, c, d, ..., z\}\) be the set of lowercase English letters. Let \(S = \{a, b, ab, cd, ae, xy, ord, ...\}\) be the set of all strings using \(A\) as an alphabet. Given \(s\in S\), \(N(s)\) is the number of vowels in \(s\). For example, \(N(algrzqi) = 2\) and \(N(bebebe) = 3\). We say \((s, t)\) belongs to relation \(R\) if \(N(s) \leq N(t)\). Which of the following statements are true (more than one answer can be correct)?
['\\(R\\) is transitive.', '\\(R\\) is not an equivalence relation.', '\\(R\\) is symmetric.', '\\(R\\) is reflexive. ']
D
null
Document 1::: Enumeration reducibility Let lower case letters n , x . . . Document 2::: Weight (strings) The a {\displaystyle a} -weight of a string, for a letter a {\displaystyle a} , is the number of times that letter occurs in the string. More precisely, let A {\displaystyle A} be a finite set (called the alphabet), a ∈ A {\displaystyle a\in A} a letter of A {\displaystyle A} , and c ∈ A ∗ {\displaystyle c\in A^{*}} a string (where A ∗ {\displaystyle A^{*}} is the free monoid generated by the elements of A {\displaystyle A} , equivalently the set of strings, including the empty string, whose letters are from A {\displaystyle A} ). Then the a {\displaystyle a} -weight of c {\displaystyle c} , denoted by w t a ( c ) {\displaystyle \mathrm {wt} _{a}(c)} , is the number of times the generator a {\displaystyle a} occurs in the unique expression for c {\displaystyle c} as a product (concatenation) of letters in A {\displaystyle A} . If A {\displaystyle A} is an abelian group, the Hamming weight w t ( c ) {\displaystyle \mathrm {wt} (c)} of c {\displaystyle c} , often simply referred to as "weight", is the number of nonzero letters in c {\displaystyle c} . Document 3::: List of set identities and relations A sequence or net S ∙ {\displaystyle S_{\bullet }} of set is called increasing or non-decreasing if (resp. decreasing or non-increasing) if for all indices i ≤ j , {\displaystyle i\leq j,} S i ⊆ S j {\displaystyle S_{i}\subseteq S_{j}} (resp. S i ⊇ S j {\displaystyle S_{i}\supseteq S_{j}} ). Document 4::: Colexicographical order Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering. An important property of the lexicographical order is that for each n, the set of words of length n is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length n is finite (or equivalently, every non-empty subset has a least element). It is not true that the set of all finite words is well-ordered; for example, the infinite set of words {b, ab, aab, aaab, ... } has no lexicographically earliest element. Document 5::: Suffix array Let S = S S . . . S {\displaystyle S=SS...S} be an n {\textstyle n} -string and let S {\displaystyle S} denote the substring of S {\displaystyle S} ranging from i {\displaystyle i} to j {\displaystyle j} inclusive. The suffix array A {\displaystyle A} of S {\displaystyle S} is now defined to be an array of integers providing the starting positions of suffixes of S {\displaystyle S} in lexicographical order. This means, an entry A {\displaystyle A} contains the starting position of the i {\displaystyle i} -th smallest suffix in S {\displaystyle S} and thus for all 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n}: S , n ] < S , n ] {\displaystyle S,n]
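A minimal sketch of the relation, assuming the vowels are a, e, i, o, u (consistent with \(N(algrzqi) = 2\) and \(N(bebebe) = 3\)); the helper names `N` and `R` simply mirror the question.

```python
VOWELS = set("aeiou")

def N(s: str) -> int:
    # Number of vowels in s.
    return sum(1 for ch in s if ch in VOWELS)

def R(s: str, t: str) -> bool:
    # (s, t) is in R exactly when N(s) <= N(t).
    return N(s) <= N(t)

print(R("algrzqi", "algrzqi"))                                        # True: reflexive
print(R("bd", "algrzqi"), R("algrzqi", "bebebe"), R("bd", "bebebe"))  # chain 0 <= 2 <= 3
print(R("bd", "bebebe"), R("bebebe", "bd"))                           # True, False: not symmetric
```

Reflexivity holds since \(N(s) \leq N(s)\) for every \(s\), and transitivity follows from transitivity of \(\leq\); symmetry fails whenever \(N(s) < N(t)\), so \(R\) is not an equivalence relation.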
epfl-collab
If A is an uncountable set and B is an uncountable set, A − B cannot be:
['none of the other options', 'uncountable', 'countably infinite', 'the null set']
A
null
Document 1::: Uncountable set In mathematics, an uncountable set (or uncountably infinite set) is an infinite set that contains too many elements to be countable. The uncountability of a set is closely related to its cardinal number: a set is uncountable if its cardinal number is larger than that of the set of all natural numbers. Document 2::: Cardinality Two sets A and B have the same cardinality if there exists a bijection (a.k.a., one-to-one correspondence) from A to B, that is, a function from A to B that is both injective and surjective. Such sets are said to be equipotent, equipollent, or equinumerous. This relationship can also be denoted A ≈ B or A ~ B.For example, the set E = {0, 2, 4, 6, ...} of non-negative even numbers has the same cardinality as the set N = {0, 1, 2, 3, ...} of natural numbers, since the function f(n) = 2n is a bijection from N to E (see picture).For finite sets A and B, if some bijection exists from A to B, then each injective or surjective function from A to B is a bijection. This is no longer true for infinite A and B. For example, the function g from N to E, defined by g(n) = 4n is injective, but not surjective, and h from N to E, defined by h(n) = n - (n mod 2) is surjective, but not injective. Neither g nor h can challenge |E| = |N|, which was established by the existence of f. Document 3::: Cardinality A has cardinality less than or equal to the cardinality of B, if there exists an injective function from A into B. Document 4::: Infinite (cardinality) In set theory, an infinite set is a set that is not a finite set. Infinite sets may be countable or uncountable. Document 5::: Cardinality A has cardinality strictly less than the cardinality of B, if there is an injective function, but no bijective function, from A to B.For example, the set N of all natural numbers has cardinality strictly less than its power set P(N), because g(n) = { n } is an injective function from N to P(N), and it can be shown that no function from N to P(N) can be bijective (see picture). By a similar argument, N has cardinality strictly less than the cardinality of the set R of all real numbers. For proofs, see Cantor's diagonal argument or Cantor's first uncountability proof.If |A| ≤ |B| and |B| ≤ |A|, then |A| = |B| (a fact known as Schröder–Bernstein theorem). The axiom of choice is equivalent to the statement that |A| ≤ |B| or |B| ≤ |A| for every A, B.
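Concrete witnesses show that each specific outcome is achievable, which is why no single option can be ruled out: \( A = B = \mathbb{R} \) gives \( A - B = \emptyset \); \( A = \mathbb{R} \) and \( B = \mathbb{R} \setminus \mathbb{N} \) (still uncountable) give the countably infinite set \( \mathbb{N} \); and \( A = \mathbb{R} \) with \( B = (-\infty, 0) \) gives the uncountable set \( [0, \infty) \).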
epfl-collab
You need to quickly find whether a person's name is in a list that contains both integers and strings, such as: list := ["Adam Smith", "Kurt Gödel", 499, 999.95, "Bertrand Arthur William Russell", 19.99, ...] What strategy can you use?
['Bubble sort the list, then use binary search.', 'Use binary search.', 'Insertion sort the list, then use binary search.', 'Use linear search.']
D
null
Document 1::: Rabin–Karp string search algorithm In computer science, the Rabin–Karp algorithm or Karp–Rabin algorithm is a string-searching algorithm created by Richard M. Karp and Michael O. Rabin (1987) that uses hashing to find an exact match of a pattern string in a text. It uses a rolling hash to quickly filter out positions of the text that cannot match the pattern, and then checks for a match at the remaining positions. Generalizations of the same idea can be used to find more than one match of a single pattern, or to find matches for more than one pattern. To find a single match of a single pattern, the expected time of the algorithm is linear in the combined length of the pattern and text, although its worst-case time complexity is the product of the two lengths. Document 2::: Two-way string-matching algorithm In computer science, the two-way string-matching algorithm is a string-searching algorithm, discovered by Maxime Crochemore and Dominique Perrin in 1991. It takes a pattern of size m, called a “needle”, preprocesses it in linear time O(m), producing information that can then be used to search for the needle in any “haystack” string, taking only linear time O(n) with n being the haystack's length. The two-way algorithm can be viewed as a combination of the forward-going Knuth–Morris–Pratt algorithm (KMP) and the backward-running Boyer–Moore string-search algorithm (BM). Like those two, the 2-way algorithm preprocesses the pattern to find partially repeating periods and computes “shifts” based on them, indicating what offset to “jump” to in the haystack when a given character is encountered. Document 3::: Aho–Corasick algorithm In computer science, the Aho–Corasick algorithm is a string-searching algorithm invented by Alfred V. Aho and Margaret J. Corasick in 1975. It is a kind of dictionary-matching algorithm that locates elements of a finite set of strings (the "dictionary") within an input text. It matches all strings simultaneously. The complexity of the algorithm is linear in the length of the strings plus the length of the searched text plus the number of output matches. Document 4::: Lists of mathematicians Lists of mathematicians cover notable mathematicians by nationality, ethnicity, religion, profession and other characteristics. Alphabetical lists are also available (see table to the right). Document 5::: Boyer–Moore string-search algorithm In computer science, the Boyer–Moore string-search algorithm is an efficient string-searching algorithm that is the standard benchmark for practical string-search literature. It was developed by Robert S. Boyer and J Strother Moore in 1977. The original paper contained static tables for computing the pattern shifts without an explanation of how to produce them.
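A short illustration of why a single linear pass is the natural choice here: the list is unsorted and mixes strings with numbers, so there is no meaningful total order to sort by before a binary search (Python 3, for instance, refuses to compare `str` with `int`). The helper `contains_name` and the sample `data` below are ad hoc, not part of the question.

```python
def contains_name(items, name: str) -> bool:
    # One linear pass; only string elements are candidates for a name match.
    return any(isinstance(item, str) and item == name for item in items)

data = ["Adam Smith", "Kurt Gödel", 499, 999.95,
        "Bertrand Arthur William Russell", 19.99]
print(contains_name(data, "Kurt Gödel"))      # True
print(contains_name(data, "Alonzo Church"))   # False
```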
epfl-collab
Let S(x) be the statement “x has been in a lake”, let L(x) be the statement “x lives in Lausanne”, and let the domain of x consist of all the humans in the world. The sentence “there exists exactly one human that lives in Lausanne and that has never been in a lake” corresponds to the statement (multiple choices possible):
['\\( \\exists x \\Bigr[( S(x) \\wedge \\neg L(x)) \\wedge \\forall y \\left[ \\neg( S(y) \\wedge \\neg L(y)) \\wedge (x=y) \\right] \\Bigr] \\)', '\\( \\exists x \\Bigr[ (\\neg S(x) \\wedge L(x)) \\wedge \\forall y \\left[ \\neg(\\neg S(y) \\wedge L(y)) \\vee (x=y) \\right] \\Bigr] \\)', '\\( \\exists! x (S(x) \\wedge L(x)) \\)', '\\( \\exists! x (\\neg S(x) \\wedge L(x)) \\)']
D
null
Document 1::: Statement (logic) In logic and semantics, the term statement is variously understood to mean either: a meaningful declarative sentence that is true or false, or a proposition. Which is the assertion that is made by (i.e., the meaning of) a true or false declarative sentence.In the latter case, a statement is distinct from a sentence in that a sentence is only one formulation of a statement, whereas there may be many other formulations expressing the same statement. By a statement, I mean "that which one states", not one's stating of it. There are many interpretations of what the term statement means, but generally, it indicates either a meaningful declarative sentence that is either true or false (bivalence). Document 2::: Statement (logic) In logic and semantics, the term statement is variously understood to mean either: a meaningful declarative sentence that is true or false, or a proposition. Which is the assertion that is made by (i.e., the meaning of) a true or false declarative sentence.In the latter case, a statement is distinct from a sentence in that a sentence is only one formulation of a statement, whereas there may be many other formulations expressing the same statement. By a statement, I mean "that which one states", not one's stating of it. There are many interpretations of what the term statement means, but generally, it indicates either a meaningful declarative sentence that is either true or false (bivalence). Document 3::: Sentence (mathematical logic) In mathematical logic, a sentence (or closed formula) of a predicate logic is a Boolean-valued well-formed formula with no free variables. A sentence can be viewed as expressing a proposition, something that must be true or false. The restriction of having no free variables is needed to make sure that sentences can have concrete, fixed truth values: as the free variables of a (general) formula can range over several values, the truth value of such a formula may vary. Sentences without any logical connectives or quantifiers in them are known as atomic sentences; by analogy to atomic formula. Document 4::: Atomic fact In logic and analytic philosophy, an atomic sentence is a type of declarative sentence which is either true or false (may also be referred to as a proposition, statement or truthbearer) and which cannot be broken down into other simpler sentences. For example, "The dog ran" is an atomic sentence in natural language, whereas "The dog ran and the cat hid" is a molecular sentence in natural language. From a logical analysis point of view, the truth or falsity of sentences in general is determined by only two things: the logical form of the sentence and the truth or falsity of its simple sentences. This is to say, for example, that the truth of the sentence "John is Greek and John is happy" is a function of the meaning of "and", and the truth values of the atomic sentences "John is Greek" and "John is happy". Document 5::: Logical quantifier In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier ∀ {\displaystyle \forall } in the first order formula ∀ x P ( x ) {\displaystyle \forall xP(x)} expresses that everything in the domain satisfies the property denoted by P {\displaystyle P} . On the other hand, the existential quantifier ∃ {\displaystyle \exists } in the formula ∃ x P ( x ) {\displaystyle \exists xP(x)} expresses that there exists something in the domain which satisfies that property. 
A formula where a quantifier takes widest scope is called a quantified formula.
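For reference, the uniqueness quantifier used in the shorter options unfolds in the standard way when applied to \( \varphi(x) \equiv \neg S(x) \wedge L(x) \):

```latex
\exists! x\, (\neg S(x) \wedge L(x))
  \;\equiv\;
\exists x \Bigl[ (\neg S(x) \wedge L(x)) \wedge
  \forall y \bigl[ (\neg S(y) \wedge L(y)) \rightarrow (x = y) \bigr] \Bigr]
```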
epfl-collab
Let \( f : A \rightarrow B \) be a function from A to B such that \( f(a) = |a| \). f is a bijection if:
['\\( A= [-1, 1] \\) and \\(B= [-1, 1] \\)', '\\( A= [-1, 0] \\) and \\(B= [0, 1] \\)', '\\( A= [0, 1] \\) and \\(B= [-1, 0] \\)', '\\( A= [-1, 0] \\) and \\(B= [-1, 0] \\)']
B
null
Document 1::: Bijective relation In mathematics, a bijection, also known as a bijective function, one-to-one correspondence, or invertible function, is a function between the elements of two sets, where each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set; there are no unpaired elements between the two sets. In mathematical terms, a bijective function f: X → Y is a one-to-one (injective) and onto (surjective) mapping of a set X to a set Y. The term one-to-one correspondence must not be confused with one-to-one function (an injective function; see figures). A bijection from the set X to the set Y has an inverse function from Y to X. If X and Y are finite sets, then the existence of a bijection means they have the same number of elements. Document 2::: Cardinality Two sets A and B have the same cardinality if there exists a bijection (a.k.a., one-to-one correspondence) from A to B, that is, a function from A to B that is both injective and surjective. Such sets are said to be equipotent, equipollent, or equinumerous. This relationship can also be denoted A ≈ B or A ~ B.For example, the set E = {0, 2, 4, 6, ...} of non-negative even numbers has the same cardinality as the set N = {0, 1, 2, 3, ...} of natural numbers, since the function f(n) = 2n is a bijection from N to E (see picture).For finite sets A and B, if some bijection exists from A to B, then each injective or surjective function from A to B is a bijection. This is no longer true for infinite A and B. For example, the function g from N to E, defined by g(n) = 4n is injective, but not surjective, and h from N to E, defined by h(n) = n - (n mod 2) is surjective, but not injective. Neither g nor h can challenge |E| = |N|, which was established by the existence of f. Document 3::: Bijection For a pairing between X and Y (where Y need not be different from X) to be a bijection, four properties must hold: each element of X must be paired with at least one element of Y, no element of X may be paired with more than one element of Y, each element of Y must be paired with at least one element of X, and no element of Y may be paired with more than one element of X.Satisfying properties (1) and (2) means that a pairing is a function with domain X. It is more common to see properties (1) and (2) written as a single statement: Every element of X is paired with exactly one element of Y. Functions which satisfy property (3) are said to be "onto Y " and are called surjections (or surjective functions). Functions which satisfy property (4) are said to be "one-to-one functions" and are called injections (or injective functions). With this terminology, a bijection is a function which is both a surjection and an injection, or using other words, a bijection is a function which is both "one-to-one" and "onto".Bijections are sometimes denoted by a two-headed rightwards arrow with tail (U+2916 ⤖ RIGHTWARDS TWO-HEADED ARROW WITH TAIL), as in f: X ⤖ Y. This symbol is a combination of the two-headed rightwards arrow (U+21A0 ↠ RIGHTWARDS TWO HEADED ARROW), sometimes used to denote surjections, and the rightwards arrow with a barbed tail (U+21A3 ↣ RIGHTWARDS ARROW WITH TAIL), sometimes used to denote injections. Document 4::: Cardinality A has cardinality less than or equal to the cardinality of B, if there exists an injective function from A into B. 
Document 5::: Identity map Formally, if M is a set, the identity function f on M is defined to be a function with M as its domain and codomain, satisfying In other words, the function value f(X) in the codomain M is always the same as the input element X in the domain M. The identity function on M is clearly an injective function as well as a surjective function, so it is bijective.The identity function f on M is often denoted by idM. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or diagonal of M.
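A short check of the intended pair: on \( A = [-1, 0] \) the map \( a \mapsto |a| = -a \) is strictly decreasing, hence injective, and its image is exactly \( [0, 1] \), so it is a bijection onto \( B = [0, 1] \). By contrast, on \( [-1, 1] \) it is neither injective (\( |-1| = |1| \)) nor onto \( [-1, 1] \), since \( |a| \) is never negative.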
epfl-collab
Let \( P(n) \) be a proposition for a positive integer \( n \) (positive integers do not include 0). You have managed to prove that \( \forall k > 2, \left[ P(k-2) \wedge P(k-1) \wedge P(k) \right] \rightarrow P(k+1) \). You would like to prove that \( P(n) \) is true for all positive integers. What is left for you to do?
['None of the other statement are correct.', 'Show that \\( P(1) \\) and \\( P(2) \\) are true, then use induction to conclude that \\( P(n) \\) is true for all positive integers.', 'Show that \\( P(1) \\), \\( P(2) \\) and \\( P(3) \\) are true, then use strong induction to conclude that \\( P(n) \\) is true for all positive integers.', 'Show that \\( P(1) \\) and \\( P(2) \\) are true, then use strong induction to conclude that \\( P(n) \\) is true for all positive integers.']
C
null
Document 1::: Idoneal number A positive integer n is idoneal if and only if it cannot be written as ab + bc + ac for distinct positive integers a, b, and c.It is sufficient to consider the set { n + k2 | 3 . k2 ≤ n ∧ gcd (n, k) = 1 }; if all these numbers are of the form p, p2, 2 · p or 2s for some integer s, where p is a prime, then n is idoneal. Document 2::: Highly powerful number In elementary number theory, a highly powerful number is a positive integer that satisfies a property introduced by the Indo-Canadian mathematician Mathukumalli V. Subbarao. The set of highly powerful numbers is a proper subset of the set of powerful numbers. Define prodex(1) = 1. Let n {\displaystyle n} be a positive integer, such that n = ∏ i = 1 k p i e p i ( n ) {\displaystyle n=\prod _{i=1}^{k}p_{i}^{e_{p_{i}}(n)}} , where p 1 , … , p k {\displaystyle p_{1},\ldots ,p_{k}} are k {\displaystyle k} distinct primes in increasing order and e p i ( n ) {\displaystyle e_{p_{i}}(n)} is a positive integer for i = 1 , … , k {\displaystyle i=1,\ldots ,k} . Define prodex ⁡ ( n ) = ∏ i = 1 k e p i ( n ) {\displaystyle \operatorname {prodex} (n)=\prod _{i=1}^{k}e_{p_{i}}(n)} . (sequence A005361 in the OEIS) The positive integer n {\displaystyle n} is defined to be a highly powerful number if and only if, for every positive integer m , 1 ≤ m < n {\displaystyle m,\,1\leq m Document 3::: Proth prime A Proth number takes the form N = k 2 n + 1 {\displaystyle N=k2^{n}+1} where k and n are positive integers, k {\displaystyle k} is odd and 2 n > k {\displaystyle 2^{n}>k} . A Proth prime is a Proth number that is prime. Without the condition that 2 n > k {\displaystyle 2^{n}>k} , all odd integers larger than 1 would be Proth numbers. Document 4::: Partition number For a positive integer n, p(n) is the number of distinct ways of representing n as a sum of positive integers. For the purposes of this definition, the order of the terms in the sum is irrelevant: two sums with the same terms in a different order are not considered to be distinct. By convention p(0) = 1, as there is one way (the empty sum) of representing zero as a sum of positive integers. Document 5::: Le Cam's theorem {\displaystyle S_{n}=X_{1}+\cdots +X_{n}.} (i.e. S n {\displaystyle S_{n}} follows a Poisson binomial distribution)Then ∑ k = 0 ∞ | Pr ( S n = k ) − λ n k e − λ n k ! | < 2 ( ∑ i = 1 n p i 2 ) .
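A note on why three base cases are needed: the proved implication requires \( k > 2 \), so its first application is at \( k = 3 \), deriving \( P(4) \) from \( P(1) \wedge P(2) \wedge P(3) \). Nothing in the step covers \( n = 1, 2, 3 \), so those cases must be verified directly; strong induction then yields \( P(n) \) for every positive integer \( n \).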
epfl-collab
Which of the following is equivalent to \((10001)_2\)? (Multiple answers can be correct)
['\\( (17)_{10} \\)', '\\( (101)_{4} \\)', '\\( (F0)_{16} \\)', '\\( (23)_{8} \\)']
A
null
Document 1::: Guard digit Performing this operation gives us 2 1 × 0.0001 2 {\displaystyle 2^{1}\times 0.0001_{2}} or 2 − 2 × 0.100 2 {\displaystyle 2^{-2}\times 0.100_{2}} . Without using a guard digit we have 2 1 × 0.100 2 − 2 1 × 0.011 2 {\displaystyle 2^{1}\times 0.100_{2}-2^{1}\times 0.011_{2}} , yielding 2 1 × 0.001 2 = {\displaystyle 2^{1}\times 0.001_{2}=} or 2 − 1 × 0.100 2 {\displaystyle 2^{-1}\times 0.100_{2}} . This gives us a relative error of 1. Document 2::: Dyson's transform : 58 This formulation of the transform is from Ramaré. : 700–701 Let A be a sequence of natural numbers, and x be any real number. Write A(x) for the number of elements of A which lie in . Suppose A = { a 1 < a 2 < ⋯ } {\displaystyle A=\{a_{1} Document 3::: Exponential-Golomb coding '100' has 3 bits, and 3-1 = 2. Hence add 2 zeros before '100', which is '00100' Similarly, consider 8. '8 + 1' in binary is '1001'. Document 4::: Signed-digit representation . d = q − 1 2 , d ¯ = 1 − q 2 | q = 0 } . {\displaystyle \mathbb {F} _{q}=\lbrace 0,1,{\bar {1}}=-1,...d={\frac {q-1}{2}},\ {\bar {d}}={\frac {1-q}{2}}\ |\ q=0\rbrace .} Document 5::: Signed-digit representation If the integers can be represented by the Kleene plus D + {\displaystyle {\mathcal {D}}^{+}} , then the set of all signed-digit representations of the real numbers R {\displaystyle \mathbb {R} } is given by R = D + × P × D N {\displaystyle {\mathcal {R}}={\mathcal {D}}^{+}\times {\mathcal {P}}\times {\mathcal {D}}^{\mathbb {N} }} , the Cartesian product of the Kleene plus D + {\displaystyle {\mathcal {D}}^{+}} , the set of all finite concatenated strings of digits d n … d 0 {\displaystyle d_{n}\ldots d_{0}} with at least one digit, the singleton P {\displaystyle {\mathcal {P}}} consisting of the radix point ( . {\displaystyle .} or , {\displaystyle ,} ), and the Cantor space D N {\displaystyle {\mathcal {D}}^{\mathbb {N} }} , the set of all infinite concatenated strings of digits d − 1 d − 2 … {\displaystyle d_{-1}d_{-2}\ldots } , with n ∈ N {\displaystyle n\in \mathbb {N} } . Each signed-digit representation r ∈ R {\displaystyle r\in {\mathcal {R}}} has a valuation v D: R → R {\displaystyle v_{\mathcal {D}}:{\mathcal {R}}\rightarrow \mathbb {R} } v D ( r ) = ∑ i = − ∞ n f D ( d i ) b i {\displaystyle v_{\mathcal {D}}(r)=\sum _{i=-\infty }^{n}f_{\mathcal {D}}(d_{i})b^{i}} .The infinite series always converges to a finite real number.
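A minimal conversion sketch for comparing the options: read \((10001)_2\) into an integer once, then render it in the other bases. The helper `to_base` is ad hoc, written only for this check.

```python
def to_base(n: int, b: int) -> str:
    # Repeated division; digits above 9 use A-F.
    digits = ""
    while n:
        digits = "0123456789ABCDEF"[n % b] + digits
        n //= b
    return digits or "0"

n = int("10001", 2)
print(n)                                             # decimal value of (10001)_2
print(to_base(n, 4), to_base(n, 8), to_base(n, 16))  # base-4, base-8, base-16 renderings
```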
epfl-collab
Which sets are countable (Multiple answers can be correct):
['\\(U-C\\) with \\(U\\) an uncountable set and \\(C\\) a countable set', 'The set of string of finite length of first names starting with the letter P', 'The set of natural numbers containing at least one 3 in their decimal representation', "The set of real numbers containing at least 100 3's in their decimal representation"]
B
null
Document 1::: Countably infinite A set S {\displaystyle S} is countable if: Its cardinality | S | {\displaystyle |S|} is less than or equal to ℵ 0 {\displaystyle \aleph _{0}} (aleph-null), the cardinality of the set of natural numbers N {\displaystyle \mathbb {N} } . There exists an injective function from S {\displaystyle S} to N {\displaystyle \mathbb {N} } . S {\displaystyle S} is empty or there exists a surjective function from N {\displaystyle \mathbb {N} } to S {\displaystyle S} . There exists a bijective mapping between S {\displaystyle S} and a subset of N {\displaystyle \mathbb {N} } . Document 2::: Uncountable set In mathematics, an uncountable set (or uncountably infinite set) is an infinite set that contains too many elements to be countable. The uncountability of a set is closely related to its cardinal number: a set is uncountable if its cardinal number is larger than that of the set of all natural numbers. Document 3::: Infinite (cardinality) In set theory, an infinite set is a set that is not a finite set. Infinite sets may be countable or uncountable. Document 4::: Recursive sets In computability theory, a set of natural numbers is called computable, recursive, or decidable if there is an algorithm which takes a number as input, terminates after a finite amount of time (possibly depending on the given number) and correctly decides whether the number belongs to the set or not. A set which is not computable is called noncomputable or undecidable. A more general class of sets than the computable ones consists of the computably enumerable (c.e.) sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set. Document 5::: Countably infinite S {\displaystyle S} is either finite ( | S | < ℵ 0 {\displaystyle |S|<\aleph _{0}} ) or countably infinite.All of these definitions are equivalent. A set S {\displaystyle S} is countably infinite if: Its cardinality | S | {\displaystyle |S|} is exactly ℵ 0 {\displaystyle \aleph _{0}} .
epfl-collab
What is the value of \(f(4)\) where \(f\) is defined as \(f(0) = f(1) = 1\) and \(f(n) = 2f(n - 1) + 3f(n - 2)\) for integers \(n \geq 2\)?
['45', '41', '39', '43']
B
null
Document 1::: Talk:Fibonacci sequence Likewise with k = 3, we can compute every third value Fn+6 = 4Fn+3 + Fn. With k = 4, we can compute every fourth value with Fn+8 = 7Fn+4 − Fn. —Quantling (talk | contribs) 15:01, 10 May 2023 (UTC) Apply the roots of unity filter to the GF to get the GF for the multisection. Document 2::: Fibonomial coefficient In mathematics, the Fibonomial coefficients or Fibonacci-binomial coefficients are defined as ( n k ) F = F n F n − 1 ⋯ F n − k + 1 F k F k − 1 ⋯ F 1 = n ! F k ! F ( n − k ) ! F {\displaystyle {\binom {n}{k}}_{F}={\frac {F_{n}F_{n-1}\cdots F_{n-k+1}}{F_{k}F_{k-1}\cdots F_{1}}}={\frac {n!_{F}}{k!_{F}(n-k)!_{F}}}} where n and k are non-negative integers, 0 ≤ k ≤ n, Fj is the j-th Fibonacci number and n!F is the nth Fibonorial, i.e. n ! F := ∏ i = 1 n F i , {\displaystyle {n! }_{F}:=\prod _{i=1}^{n}F_{i},} where 0!F, being the empty product, evaluates to 1. Document 3::: Binet's formula The Fibonacci numbers may be defined by the recurrence relation and for n > 1. Under some older definitions, the value F 0 = 0 {\displaystyle F_{0}=0} is omitted, so that the sequence starts with F 1 = F 2 = 1 , {\displaystyle F_{1}=F_{2}=1,} and the recurrence F n = F n − 1 + F n − 2 {\displaystyle F_{n}=F_{n-1}+F_{n-2}} is valid for n > 2.The first 20 Fibonacci numbers Fn are: Document 4::: Function representation , x n ) ≥ 0 {\displaystyle f(x_{1},x_{2},...,x_{n})\geq 0} belong to the object, and the points with f ( x 1 , x 2 , . . . Document 5::: Function evaluation A function is most often denoted by letters such as f, g and h, and the value of a function f at an element x of its domain is denoted by f(x); the numerical value resulting from the function evaluation at a particular input value is denoted by replacing x with this value; for example, the value of f at x = 4 is denoted by f(4). When the function is not named and is represented by an expression E, the value of the function at, say, x = 4 may be denoted by E|x=4. For example, the value at 4 of the function that maps x to ( x + 1 ) 2 {\displaystyle (x+1)^{2}} may be denoted by ( x + 1 ) 2 | x = 4 {\displaystyle \left.
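Unrolling the recurrence by hand gives \( f(2) = 2 \cdot 1 + 3 \cdot 1 = 5 \), \( f(3) = 2 \cdot 5 + 3 \cdot 1 = 13 \), and \( f(4) = 2 \cdot 13 + 3 \cdot 5 = 41 \). The sketch below simply repeats that computation in Python.

```python
def f(n: int) -> int:
    # Base cases f(0) = f(1) = 1; otherwise the stated recurrence.
    if n in (0, 1):
        return 1
    return 2 * f(n - 1) + 3 * f(n - 2)

print([f(n) for n in range(5)])  # [1, 1, 5, 13, 41]
```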
epfl-collab
Which of the following are true regarding the lengths of integers in some base \(b\) (i.e., the number of digits base \(b\)) in different bases, given \(N = (FFFF)_{16}\)?
['\\((N)_4\\) is of length 12', '\\((N)_4\\) is of length 4', '\\((N)_2\\) is of length 16', '\\((N)_{10}\\) is of length 40']
C
null
Document 1::: Digit sum Let n {\displaystyle n} be a natural number. We define the digit sum for base b > 1 {\displaystyle b>1} , F b: N → N {\displaystyle F_{b}:\mathbb {N} \rightarrow \mathbb {N} } to be the following: F b ( n ) = ∑ i = 0 k d i {\displaystyle F_{b}(n)=\sum _{i=0}^{k}d_{i}} where k = ⌊ log b ⁡ n ⌋ {\displaystyle k=\lfloor \log _{b}{n}\rfloor } is one less than the number of digits in the number in base b {\displaystyle b} , and d i = n mod b i + 1 − n mod b i b i {\displaystyle d_{i}={\frac {n{\bmod {b^{i+1}}}-n{\bmod {b}}^{i}}{b^{i}}}} is the value of each digit of the number. For example, in base 10, the digit sum of 84001 is F 10 ( 84001 ) = 8 + 4 + 0 + 0 + 1 = 13. {\displaystyle F_{10}(84001)=8+4+0+0+1=13.} For any two bases 2 ≤ b 1 < b 2 {\displaystyle 2\leq b_{1} Document 2::: Signed-digit representation The set of all signed-digit representations of the integers modulo b n {\displaystyle b^{n}} , Z ∖ b n Z {\displaystyle \mathbb {Z} \backslash b^{n}\mathbb {Z} } is given by the set D n {\displaystyle {\mathcal {D}}^{n}} , the set of all finite concatenated strings of digits d n − 1 … d 0 {\displaystyle d_{n-1}\ldots d_{0}} of length n {\displaystyle n} , with n ∈ N {\displaystyle n\in \mathbb {N} } . Each signed-digit representation m ∈ D n {\displaystyle m\in {\mathcal {D}}^{n}} has a valuation v D: D n → Z / b n Z {\displaystyle v_{\mathcal {D}}:{\mathcal {D}}^{n}\rightarrow \mathbb {Z} /b^{n}\mathbb {Z} } v D ( m ) ≡ ∑ i = 0 n − 1 f D ( d i ) b i mod b n {\displaystyle v_{\mathcal {D}}(m)\equiv \sum _{i=0}^{n-1}f_{\mathcal {D}}(d_{i})b^{i}{\bmod {b}}^{n}} Document 3::: Hex digit In mathematics and computing, the hexadecimal (also base-16 or simply hex) numeral system is a positional numeral system that represents numbers using a radix (base) of sixteen. Unlike the decimal system representing numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often the symbols "0"–"9" to represent values 0 to 9, and "A"–"F" (or alternatively "a"–"f") to represent values from ten to fifteen. Software developers and system designers widely use hexadecimal numbers because they provide a human-friendly representation of binary-coded values. Each hexadecimal digit represents four bits (binary digits), also known as a nibble (or nybble). Document 4::: 16-bit computing A 16-bit register can store 216 different values. The range of integer values that can be stored in 16 bits depends on the integer representation used. With the two most common representations, the range is 0 through 65,535 (216 − 1) for representation as an (unsigned) binary number, and −32,768 (−1 × 215) through 32,767 (215 − 1) for representation as two's complement. Since 216 is 65,536, a processor with 16-bit memory addresses can directly access 64 KB (65,536 bytes) of byte-addressable memory. If a system uses segmentation with 16-bit segment offsets, more can be accessed. Document 5::: Hex format In mathematics and computing, the hexadecimal (also base-16 or simply hex) numeral system is a positional numeral system that represents numbers using a radix (base) of sixteen. Unlike the decimal system representing numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often the symbols "0"–"9" to represent values 0 to 9, and "A"–"F" (or alternatively "a"–"f") to represent values from ten to fifteen. Software developers and system designers widely use hexadecimal numbers because they provide a human-friendly representation of binary-coded values. 
Each hexadecimal digit represents four bits (binary digits), also known as a nibble (or nybble).
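A small sketch of the digit counts: \( N = (FFFF)_{16} = 65535 \), which needs 16 binary digits, 8 base-4 digits, and 5 decimal digits (each hexadecimal digit expands to exactly four bits and exactly two base-4 digits). The helper `digit_length` below is ad hoc.

```python
def digit_length(n: int, b: int) -> int:
    # Number of digits of n when written in base b (n assumed positive).
    length = 0
    while n:
        n //= b
        length += 1
    return length

N = int("FFFF", 16)
print(N)                                            # 65535
print({b: digit_length(N, b) for b in (2, 4, 10)})  # {2: 16, 4: 8, 10: 5}
```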
epfl-collab
In a lottery, a bucket of 10 numbered red balls and a bucket of 5 numbered green balls are used. Three red balls and two green balls are drawn (without replacement). What is the probability of winning the lottery? (The order in which balls are drawn does not matter).
['$$\\frac{1}{1900}$$', '$$\\frac{1}{1200}$$', '$$\\frac{1}{7200}$$', '$$\\frac{1}{14400}$$']
B
null
Document 1::: Lottery mathematics Lottery mathematics is used to calculate probabilities of winning or losing a lottery game. It is based primarily on combinatorics, particularly the twelvefold way and combinations without replacement. Document 2::: Randomness For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. A random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. Document 3::: Probability spaces A probability function, P {\displaystyle P} , which assigns each event in the event space a probability, which is a number between 0 and 1.In order to provide a sensible model of probability, these elements must satisfy a number of axioms, detailed in this article. In the example of the throw of a standard die, we would take the sample space to be { 1 , 2 , 3 , 4 , 5 , 6 } {\displaystyle \{1,2,3,4,5,6\}} . For the event space, we could simply use the set of all subsets of the sample space, which would then contain simple events such as { 5 } {\displaystyle \{5\}} ("the die lands on 5"), as well as complex events such as { 2 , 4 , 6 } {\displaystyle \{2,4,6\}} ("the die lands on an even number"). Document 4::: Urn problem In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. A number of important variations are described below. An urn model is either a set of probabilities that describe events within an urn problem, or it is a probability distribution, or a family of such distributions, of random variables associated with urn problems. Document 5::: Win probability Win probability is a statistical tool which suggests a sports team's chances of winning at any given point in a game, based on the performance of historical teams in the same situation. The art of estimating win probability involves choosing which pieces of context matter. Baseball win probability estimates often include whether a team is home or away, inning, number of outs, which bases are occupied, and the score difference. Because baseball proceeds batter by batter, each new batter introduces a discrete state.
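The count behind the listed probabilities, assuming a winning ticket must match the drawn combination exactly and that all unordered draws are equally likely: there are \( \binom{10}{3} = 120 \) ways to choose the red balls and \( \binom{5}{2} = 10 \) ways to choose the green ones, for \( 120 \times 10 = 1200 \) equally likely outcomes, hence a winning probability of \( 1/1200 \). A one-line check in Python:

```python
from math import comb

# 3 of 10 red balls and 2 of 5 green balls, order irrelevant.
outcomes = comb(10, 3) * comb(5, 2)
print(comb(10, 3), comb(5, 2), outcomes, 1 / outcomes)  # 120 10 1200 0.000833...
```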