source_dataset: stringclasses (1 value)
question: stringlengths (6 to 1.87k)
choices: stringlengths (20 to 1.02k)
answer: stringclasses (4 values)
rationale: float64
documents: stringlengths (1.01k to 5.9k)
epfl-collab
Tick the \emph{false} assertion concerning WPA-TKIP.
['WPA-TKIP avoids replay attacks using a counter.', 'WPA-TKIP provides much more confidentiality than WEP.', 'WPA-TKIP uses a fixed RC4 key.', "WPA-TKIP doesn't protect well the integrity of the messages."]
C
null
Document 1::: KRACK KRACK ("Key Reinstallation Attack") is a replay attack (a type of exploitable flaw) on the Wi-Fi Protected Access protocol that secures Wi-Fi connections. It was discovered in 2016 by the Belgian researchers Mathy Vanhoef and Frank Piessens of the University of Leuven. Vanhoef's research group published details of the attack in October 2017. By repeatedly resetting the nonce transmitted in the third step of the WPA2 handshake, an attacker can gradually match encrypted packets seen before and learn the full keychain used to encrypt the traffic. Document 2::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol ⊥ {\displaystyle \bot } .Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), ⊥ {\displaystyle \bot } , is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 3::: Aircrack-ng Aircrack-ng is a network software suite consisting of a detector, packet sniffer, WEP and WPA/WPA2-PSK cracker and analysis tool for 802.11 wireless LANs. It works with any wireless network interface controller whose driver supports raw monitoring mode and can sniff 802.11a, 802.11b and 802.11g traffic. Packages are released for Linux and Windows.Aircrack-ng is a fork of the original Aircrack project. It can be found as a preinstalled tool in many security-focused Linux distributions such as Kali Linux or Parrot Security OS, which share common attributes as they are developed under the same project (Debian). Document 4::: KRACK The weakness is exhibited in the Wi-Fi standard itself, and not due to errors in the implementation of a sound standard by individual products or implementations. Therefore, any correct implementation of WPA2 is likely to be vulnerable. The vulnerability affects all major software platforms, including Microsoft Windows, macOS, iOS, Android, Linux, OpenBSD and others.The widely used open-source implementation wpa_supplicant, utilized by Linux and Android, was especially susceptible as it can be manipulated to install an all-zeros encryption key, effectively nullifying WPA2 protection in a man-in-the-middle attack. Version 2.7 fixed this vulnerability.The security protocol protecting many Wi-Fi devices can essentially be bypassed, potentially allowing an attacker to intercept sent and received data. Document 5::: KisMAC Cracking of WEP and WPA keys, both by brute force, and exploiting flaws such as weak scheduling and badly generated keys is supported when a card capable of monitor mode is used, and packet reinjection can be done with a supported card (Prism2 and some Ralink cards). GPS mapping can be performed when an NMEA compatible GPS receiver is attached.Kismac2 is a fork of the original software with a new GUI, new features and that works for OS X 10.7 - 10.10, 64-bit only. It is no longer maintained. Data can also be saved in pcap format and loaded into programs such as Wireshark.
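The false assertion is C: TKIP derives a fresh RC4 key per packet precisely because reusing a stream-cipher keystream is fatal. A minimal Python sketch (a random keystream stands in for RC4 output; the messages are hypothetical) of why keystream reuse destroys confidentiality:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)            # stand-in for an RC4 keystream reused twice

p1 = b"ATTACK AT DAWN".ljust(32)
p2 = b"RETREAT AT DUSK".ljust(32)
c1, c2 = xor(p1, keystream), xor(p2, keystream)

# The shared keystream cancels out, leaking the XOR of the two plaintexts.
assert xor(c1, c2) == xor(p1, p2)
```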
epfl-collab
Tick the \emph{correct} assertion. In ElGamal $\ldots$
['the encryption algorithm is deterministic.', 'the key recovery problem is equivalent to the Computational Diffie Hellman problem.', 'the decryption problem can be hard even if the discrete logarithm is easy to compute in the underlying group.', 'the size of the ciphertext is always bigger than the size of the corresponding plaintext.']
D
null
Document 1::: Decision Linear assumption The Decision Linear (DLIN) assumption is a computational hardness assumption used in elliptic curve cryptography. In particular, the DLIN assumption is useful in settings where the decisional Diffie–Hellman assumption does not hold (as is often the case in pairing-based cryptography). The Decision Linear assumption was introduced by Boneh, Boyen, and Shacham.Informally the DLIN assumption states that given ( u , v , h , u x , v y ) {\displaystyle (u,\,v,\,h,\,u^{x},\,v^{y})} , with u , v , h {\displaystyle u,\,v,\,h} random group elements and x , y {\displaystyle x,\,y} random exponents, it is hard to distinguish h x + y {\displaystyle h^{x+y}} from an independent random group element η {\displaystyle \eta } . Document 2::: If and only if The truth table of P ⇔ {\displaystyle \Leftrightarrow } Q is as follows: It is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate. Document 3::: Correctness (computer science) In particular it is not expected to be a correctness assertion for a given program implementing the algorithm on a given machine. That would involve such considerations as limitations on computer memory. A deep result in proof theory, the Curry–Howard correspondence, states that a proof of functional correctness in constructive logic corresponds to a certain program in the lambda calculus. Document 4::: Judgment (mathematical logic) In mathematical logic, a judgment (or judgement) or assertion is a statement or enunciation in a metalanguage. For example, typical judgments in first-order logic would be that a string is a well-formed formula, or that a proposition is true. Similarly, a judgment may assert the occurrence of a free variable in an expression of the object language, or the provability of a proposition. In general, a judgment may be any inductively definable assertion in the metatheory. Document 5::: Correctness (computer science) For example, successively searching through integers 1, 2, 3, … to see if we can find an example of some phenomenon—say an odd perfect number—it is quite easy to write a partially correct program (see box). But to say this program is totally correct would be to assert something currently not known in number theory. A proof would have to be a mathematical proof, assuming both the algorithm and specification are given formally.
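The correct assertion is D: an ElGamal ciphertext is a pair of group elements, so it is always larger than the single plaintext element it encrypts. A toy Python sketch over $\mathbf{Z}_p^*$ (parameters chosen for illustration only, far too small for security) that also shows the encryption is probabilistic:

```python
import random

p, g = 467, 2                         # toy public parameters
x = random.randrange(1, p - 1)        # private key
h = pow(g, x, p)                      # public key

def encrypt(m: int) -> tuple[int, int]:
    r = random.randrange(1, p - 1)    # fresh randomness each call: not deterministic
    return pow(g, r, p), (m * pow(h, r, p)) % p

def decrypt(c1: int, c2: int) -> int:
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 * c1^(-x) mod p

m = 123
c1, c2 = encrypt(m)
assert decrypt(c1, c2) == m
# One plaintext element, TWO ciphertext elements: the ciphertext is always bigger.
```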
epfl-collab
One-time pad ...
['pads the message at least once before encryption.', 'never uses a key $K$ which is picked from a uniform distribution.', 'uses an invertible group operation such as ``$\\oplus$" for encryption.', 'allows an efficient key management.']
C
null
Document 1::: Blinding (cryptography) The one-time pad (OTP) is an application of blinding to the secure communication problem, by its very nature. Alice would like to send a message to Bob secretly, however all of their communication can be read by Oscar. Therefore, Alice sends the message after blinding it with a secret key or OTP that she shares with Bob. Document 2::: Packet Assembler/Disassembler A packet assembler/disassembler, abbreviated PAD is a communications device which provides multiple asynchronous terminal connectivity to an X.25 (packet-switching) network or host computer. It collects data from a group of terminals and places the data into X.25 packets (assembly). A PAD also does the reverse, it takes data packets from packet-switching network or host computer and returns them into a character stream that can be sent to the terminals (disassembly). A Frame Relay assembler/disassembler (FRAD) is a similar device for accessing Frame Relay networks. Document 3::: Random number table If carefully prepared, the filtering and testing processes remove any noticeable bias or asymmetry from the hardware-generated original numbers so that such tables provide the most "reliable" random numbers available to the casual user. Note that any published (or otherwise accessible) random data table is unsuitable for cryptographic purposes since the accessibility of the numbers makes them effectively predictable, and hence their effect on a cryptosystem is also predictable. By way of contrast, genuinely random numbers that are only accessible to the intended encoder and decoder allow literally unbreakable encryption of a similar or lesser amount of meaningful data (using a simple exclusive OR operation) in a method known as the one-time pad, which has often insurmountable problems that are barriers to implementing this method correctly. Document 4::: Pad Abort Test 1 Pad Abort Test 1 was the first abort test of the Apollo spacecraft on November 7, 1963. Document 5::: Korg PadKontrol The Korg PadKontrol was a USB MIDI controller manufactured by Korg. The PadKontrol was released in 2005 as a competitor to the Akai MPD and the M-Audio Triggerfinger. The PadKontrol has sixteen assignable, velocity sensitive pads, with sixteen "scenes" which allow the user to toggle between various pad configurations, and an assignable X-Y pad for drum rolls, flams, or controller input inside a VSTi or a MIDI sequencer.
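Answer C: the Vernam one-time pad combines the message and a uniformly random key of the same length with an invertible group operation such as XOR, so applying the same operation with the same key inverts the encryption. A minimal sketch:

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data)      # key as long as the message, used once
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"one-time pad demo"
key = os.urandom(len(msg))            # key drawn from a uniform distribution
ct = otp(msg, key)
assert otp(ct, key) == msg            # XOR is its own inverse
```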
epfl-collab
The Merkle-D{\aa}mgard construction is
['a method which constructs a compression function from a block cipher.', 'a method which constructs a block cipher function from a hash function.', 'a method which iterates a compression function to obtain a hash function.', 'a method which iterates a hash function to obtain a compression function.']
C
null
Document 1::: Ralph Merkle puzzle cryptographic system In cryptography, Merkle's Puzzles is an early construction for a public-key cryptosystem, a protocol devised by Ralph Merkle in 1974 and published in 1978. It allows two parties to agree on a shared secret by exchanging messages, even if they have no secrets in common beforehand. Document 2::: Merkle tree Conversely, in a hash list, the number is proportional to the number of leaf nodes itself. A Merkle tree is therefore an efficient example of a cryptographic commitment scheme, in which the root of the tree is seen as a commitment and leaf nodes may be revealed and proven to be part of the original commitment. The concept of a hash tree is named after Ralph Merkle, who patented it in 1979. Document 3::: Merkle tree In cryptography and computer science, a hash tree or Merkle tree is a tree in which every "leaf" (node) is labelled with the cryptographic hash of a data block, and every node that is not a leaf (called a branch, inner node, or inode) is labelled with the cryptographic hash of the labels of its child nodes. A hash tree allows efficient and secure verification of the contents of a large data structure. A hash tree is a generalization of a hash list and a hash chain. Demonstrating that a leaf node is a part of a given binary hash tree requires computing a number of hashes proportional to the logarithm of the number of leaf nodes in the tree. Document 4::: Mersenne Twister The Mersenne Twister is a general-purpose pseudorandom number generator (PRNG) developed in 1997 by Makoto Matsumoto (松本 眞) and Takuji Nishimura (西村 拓士). Its name derives from the fact that its period length is chosen to be a Mersenne prime. The Mersenne Twister was designed specifically to rectify most of the flaws found in older PRNGs. The most commonly used version of the Mersenne Twister algorithm is based on the Mersenne prime 2 19937 − 1 {\displaystyle 2^{19937}-1} . The standard implementation of that, MT19937, uses a 32-bit word length. There is another implementation (with five variants) that uses a 64-bit word length, MT19937-64; it generates a different sequence. Document 5::: Computational Diffie–Hellman assumption Consider a cyclic group G of order q. The CDH assumption states that, given ( g , g a , g b ) {\displaystyle (g,g^{a},g^{b})\,} for a randomly chosen generator g and random a , b ∈ { 0 , … , q − 1 } , {\displaystyle a,b\in \{0,\ldots ,q-1\},\,} it is computationally intractable to compute the value g a b . {\displaystyle g^{ab}.\,}
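Answer C: Merkle-Damgård iterates a compression function over fixed-size message blocks, chaining the internal state. A toy sketch with a hypothetical compression function built from SHA-256 (structure only, not a real hash design):

```python
import hashlib

BLOCK = 16

def compress(state: bytes, block: bytes) -> bytes:
    # Hypothetical compression function: maps (state, block) -> new state.
    return hashlib.sha256(state + block).digest()[:16]

def md_hash(msg: bytes, iv: bytes = b"\x00" * 16) -> bytes:
    # Merkle-Damgård strengthening: pad, then append the message length.
    padded = msg + b"\x80" + b"\x00" * ((-len(msg) - 9) % BLOCK) \
                 + len(msg).to_bytes(8, "big")
    state = iv
    for i in range(0, len(padded), BLOCK):   # iterate the compression function
        state = compress(state, padded[i:i + BLOCK])
    return state

print(md_hash(b"hello").hex())
```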
epfl-collab
The Fermat Test outputs `maybe prime' with a probability that may be high even though $n$ is composite when ...
['$n$ is a Fermat number.', '$n$ is an even composite.', '$n$ is the multiplication of two primes.', '$n$ is a Carmichael number.']
D
null
Document 1::: Fermat pseudoprime Fermat's little theorem states that if p is prime and a is coprime to p, then ap−1 − 1 is divisible by p. For an integer a > 1, if a composite integer x divides ax−1 − 1, then x is called a Fermat pseudoprime to base a.: Def. 3.32 In other words, a composite integer is a Fermat pseudoprime to base a if it successfully passes the Fermat primality test for the base a. The false statement that all numbers that pass the Fermat primality test for base 2, are prime, is called the Chinese hypothesis. The smallest base-2 Fermat pseudoprime is 341. It is not a prime, since it equals 11·31, but it satisfies Fermat's little theorem: 2340 ≡ 1 (mod 341) and thus passes the Fermat primality test for the base 2. Document 2::: Great Internet Mersenne Prime Search In 2018, GIMPS adopted the Fermat primality test as an alternative option for primality testing, while keeping the Lucas-Lehmer test as a double-check for Mersenne numbers detected as probable primes by the Fermat test. (While the Lucas-Lehmer test is deterministic and the Fermat test is only probabilistic, the probability of the Fermat test finding a Fermat pseudoprime that is not prime is vastly lower than the error rate of the Lucas-Lehmer test due to computer hardware errors.) In September 2020, GIMPS began to support primality proofs based on verifiable delay functions. Document 3::: Euler pseudoprime The equation can be tested rather quickly, which can be used for probabilistic primality testing. These tests are twice as strong as tests based on Fermat's little theorem. Every Euler pseudoprime is also a Fermat pseudoprime. It is not possible to produce a definite test of primality based on whether a number is an Euler pseudoprime because there exist absolute Euler pseudoprimes, numbers which are Euler pseudoprimes to every base relatively prime to themselves. The absolute Euler pseudoprimes are a subset of the absolute Fermat pseudoprimes, or Carmichael numbers, and the smallest absolute Euler pseudoprime is 1729 = 7×13×19. Document 4::: Euler pseudoprime In arithmetic, an odd composite integer n is called an Euler pseudoprime to base a, if a and n are coprime, and a ( n − 1 ) / 2 ≡ ± 1 ( mod n ) {\displaystyle a^{(n-1)/2}\equiv \pm 1{\pmod {n}}} (where mod refers to the modulo operation). The motivation for this definition is the fact that all prime numbers p satisfy the above equation which can be deduced from Fermat's little theorem. Fermat's theorem asserts that if p is prime, and coprime to a, then ap−1 ≡ 1 (mod p). Suppose that p>2 is prime, then p can be expressed as 2q + 1 where q is an integer. Document 5::: Composite numbers Likewise, the integers 2 and 3 are not composite numbers because each of them can only be divided by one and itself. The composite numbers up to 150 are: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, 30, 32, 33, 34, 35, 36, 38, 39, 40, 42, 44, 45, 46, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 60, 62, 63, 64, 65, 66, 68, 69, 70, 72, 74, 75, 76, 77, 78, 80, 81, 82, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100, 102, 104, 105, 106, 108, 110, 111, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 128, 129, 130, 132, 133, 134, 135, 136, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 150. (sequence A002808 in the OEIS)Every composite number can be written as the product of two or more (not necessarily distinct) primes. 
For example, the composite number 299 can be written as 13 × 23, and the composite number 360 can be written as 23 × 32 × 5; furthermore, this representation is unique up to the order of the factors. This fact is called the fundamental theorem of arithmetic.There are several known primality tests that can determine whether a number is prime or composite, without necessarily revealing the factorization of a composite input.
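Answer D: a Carmichael number such as 561 = 3 x 11 x 17 satisfies a^(n-1) ≡ 1 (mod n) for every base a coprime to n, so the Fermat test answers `maybe prime' no matter how many coprime bases are tried. A sketch (non-coprime bases are skipped; they would betray 561 only via a lucky gcd):

```python
import math, random

def fermat_test(n: int, rounds: int = 20) -> bool:
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if math.gcd(a, n) != 1:
            continue                   # skip bases sharing a factor with n
        if pow(a, n - 1, n) != 1:
            return False               # Fermat witness: definitely composite
    return True                        # "maybe prime"

print(fermat_test(561))    # True: a Carmichael number fools every coprime base
print(fermat_test(341))    # usually False: 341 = 11*31 fails most coprime bases
```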
epfl-collab
What should the minimal length of the output of a hash function be to provide security against \emph{collision attacks} of complexity $2^{256}$?
['$512$ bits.', '$2^{512}$ bits.', '$2^{256}$ bits.', '$256$ bits.']
A
null
Document 1::: Minimal perfect hash function For frequently changing S dynamic perfect hash functions may be used at the cost of additional space. The space requirement to store the perfect hash function is in O(n). The important performance parameters for perfect hash functions are the evaluation time, which should be constant, the construction time, and the representation size. Document 2::: Minimal perfect hash function In computer science, a perfect hash function h for a set S is a hash function that maps distinct elements in S to a set of m integers, with no collisions. In mathematical terms, it is an injective function. Perfect hash functions may be used to implement a lookup table with constant worst-case access time. A perfect hash function can, as any hash function, be used to implement hash tables, with the advantage that no collision resolution has to be implemented. Document 3::: Length extension attack In cryptography and computer security, a length extension attack is a type of attack where an attacker can use Hash(message1) and the length of message1 to calculate Hash(message1 ‖ message2) for an attacker-controlled message2, without needing to know the content of message1. This is problematic when the hash is used as a message authentication code with construction Hash(secret ‖ message), and message and the length of secret is known, because an attacker can include extra information at the end of the message and produce a valid hash without knowing the secret. Algorithms like MD5, SHA-1 and most of SHA-2 that are based on the Merkle–Damgård construction are susceptible to this kind of attack. Truncated versions of SHA-2, including SHA-384 and SHA-512/256 are not susceptible, nor is the SHA-3 algorithm.HMAC also uses a different construction and so is not vulnerable to length extension attacks. Document 4::: Hash collision In computer science, a hash collision or hash clash is when two pieces of data in a hash table share the same hash value. The hash value in this case is derived from a hash function which takes a data input and returns a fixed length of bits.Although hash algorithms have been created with the intent of being collision resistant, they can still sometimes map different data to the same hash (by virtue of the pigeonhole principle). Malicious users can take advantage of this to mimic, access, or alter data.Due to the possible negative applications of hash collisions in data management and computer security (in particular, cryptographic hash functions), collision avoidance has become an important topic in computer security. Document 5::: Secure Hash Algorithms They differ in the word size; SHA-256 uses 32-bit words where SHA-512 uses 64-bit words. There are also truncated versions of each standard, known as SHA-224, SHA-384, SHA-512/224 and SHA-512/256. These were also designed by the NSA.
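Answer A: by the birthday paradox, collisions in an n-bit hash appear after about 2^(n/2) evaluations, so resisting attacks of complexity 2^256 requires a 512-bit digest. A sketch finding a collision in a hash truncated to 24 bits after roughly 2^12 tries:

```python
import hashlib, itertools

def h24(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:3]   # truncate to 24 bits

seen: dict[bytes, bytes] = {}
for i in itertools.count():
    m = i.to_bytes(8, "big")
    d = h24(m)
    if d in seen and seen[d] != m:
        print("collision after", i, "tries; ~2**12 = 4096 expected")
        break
    seen[d] = m
```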
epfl-collab
Let $G$ be a group generated by $g$. What is the discrete logarithm problem?
["find $x,x'$ such that $g^x=g^{x'}$ and $x\\ne x'$.", 'find $y$ such that $g^x=y$ for a given $x$.', 'find $x$ such that $g^x=y$ for a given $y$.', 'find $x,y$ such that $g^x=y$.']
C
null
Document 1::: Discrete log problem Let G be any group. Denote its group operation by multiplication and its identity element by 1. Let b be any element of G. For any positive integer k, the expression bk denotes the product of b with itself k times: b k = b ⋅ b ⋯ b ⏟ k factors . Document 2::: Discrete Logarithm In mathematics, for given real numbers a and b, the logarithm logb a is a number x such that bx = a. Analogously, in any group G, powers bk can be defined for all integers k, and the discrete logarithm logb a is an integer k such that bk = a. In number theory, the more commonly used term is index: we can write x = indr a (mod m) (read "the index of a to the base r modulo m") for rx ≡ a (mod m) if r is a primitive root of m and gcd(a,m) = 1. Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. Several important algorithms in public-key cryptography, such as ElGamal, base their security on the assumption that the discrete logarithm problem (DLP) over carefully chosen groups has no efficient solution. Document 3::: Discrete log problem {\displaystyle b^{k}=\underbrace {b\cdot b\cdots b} _{k\;{\text{factors}}}.} Similarly, let b−k denote the product of b−1 with itself k times. For k = 0, the kth power is the identity: b0 = 1. Let a also be an element of G. An integer k that solves the equation bk = a is termed a discrete logarithm (or simply logarithm, in this context) of a to the base b. One writes k = logb a. Document 4::: P-group generation algorithm In mathematics, specifically group theory, finite groups of prime power order p n {\displaystyle p^{n}} , for a fixed prime number p {\displaystyle p} and varying integer exponents n ≥ 0 {\displaystyle n\geq 0} , are briefly called finite p-groups. The p-group generation algorithm by M. F. Newman and E. A. O'Brien is a recursive process for constructing the descendant tree of an assigned finite p-group which is taken as the root of the tree. Document 5::: Computational Diffie–Hellman assumption Consider a cyclic group G of order q. The CDH assumption states that, given ( g , g a , g b ) {\displaystyle (g,g^{a},g^{b})\,} for a randomly chosen generator g and random a , b ∈ { 0 , … , q − 1 } , {\displaystyle a,b\in \{0,\ldots ,q-1\},\,} it is computationally intractable to compute the value g a b . {\displaystyle g^{ab}.\,}
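Answer C: the discrete logarithm problem is, given y, to recover x with g^x = y. A brute-force sketch in a toy group; the point of DLP-based cryptography is that no comparably cheap search is known for well-chosen large groups:

```python
p, g = 1019, 2
y = pow(g, 777, p)            # challenge: a y known to be a power of g

def dlog(y: int) -> int:
    acc = 1
    for x in range(p - 1):    # exhaustive search: exponential in the size of p
        if acc == y:
            return x
        acc = (acc * g) % p
    raise ValueError("y is not in the subgroup generated by g")

assert pow(g, dlog(y), p) == y
```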
epfl-collab
Bluetooth is \dots
['\\emph{not} designed to transmit data.', 'first introduced by vikings.', 'a short-range wireless technology.', 'a long-range wireless technology.']
C
null
Document 1::: Bluetooth Basic Rate/Enhanced Data Rate Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz. Document 2::: Bluetooth Basic Rate/Enhanced Data Rate It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. Document 3::: Bluetooth Basic Rate/Enhanced Data Rate The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market it as a Bluetooth device. A network of patents apply to the technology, which are licensed to individual qualifying devices. As of 2021, 4.7 billion Bluetooth integrated circuit chips are shipped annually. Document 4::: Piconet A piconet is an ad hoc network that links a wireless user group of devices using Bluetooth technology protocols. A piconet consists of two or more devices occupying the same physical channel (synchronized to a common clock and hopping sequence). It allows one master device to interconnect with up to seven active slave devices. Up to 255 further slave devices can be inactive, or parked, which the master device can bring into active status at any time, but an active station must go into parked first. Some examples of piconets include a cell phone connected to a computer, a laptop and a Bluetooth-enabled digital camera, or several PDAs that are connected to each other. Document 5::: IBeacon iBeacon is a protocol developed by Apple and introduced at the Apple Worldwide Developers Conference in 2013. Various vendors have since made iBeacon-compatible hardware transmitters – typically called beacons – a class of Bluetooth Low Energy (BLE) devices that broadcast their identifier to nearby portable electronic devices. The technology enables smartphones, tablets and other devices to perform actions when in proximity to an iBeacon.iBeacon is based on Bluetooth low energy proximity sensing by transmitting a universally unique identifier picked up by a compatible app or operating system. The identifier and several bytes sent with it can be used to determine the device's physical location, track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification.
epfl-collab
Tick the \emph{false} answer. In a group, the operation\dots
['is associative.', 'has a neutral element.', 'is commutative.', 'associates an inverse to each value.']
C
null
Document 1::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol ⊥ {\displaystyle \bot } .Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), ⊥ {\displaystyle \bot } , is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 2::: False position In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations. Document 3::: Truth table A truth table is a mathematical table used in logic—specifically in connection with Boolean algebra, boolean functions, and propositional calculus—which sets out the functional values of logical expressions on each of their functional arguments, that is, for each combination of values taken by their logical variables. In particular, truth tables can be used to show whether a propositional expression is true for all legitimate input values, that is, logically valid. A truth table has one column for each input variable (for example, A and B), and one final column showing all of the possible results of the logical operation that the table represents (for example, A XOR B). Each row of the truth table contains one possible configuration of the input variables (for instance, A=true, B=false), and the result of the operation for those values. Document 4::: False position As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of (written in modern notation) the equation x + x/4 = 15. This is solved by false position. First, guess that x = 4 to obtain, on the left, 4 + 4/4 = 5. Document 5::: Algebraic decision diagram An algebraic decision diagram (ADD) or a multi-terminal binary decision diagram (MTBDD), is a data structure that is used to symbolically represent a Boolean function whose codomain is an arbitrary finite set S. An ADD is an extension of a reduced ordered binary decision diagram, or commonly named binary decision diagram (BDD) in the literature, which terminal nodes are not restricted to the Boolean values 0 (FALSE) and 1 (TRUE). The terminal nodes may take any value from a set of constants S.
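The false answer is C: associativity, a neutral element, and inverses are the group axioms; commutativity is not required. A sketch checking this on S3, the smallest non-abelian group (permutations of three elements under composition):

```python
from itertools import permutations

elems = list(permutations(range(3)))        # the six elements of S3
identity = (0, 1, 2)

def compose(f, g):
    return tuple(f[g[i]] for i in range(3))

# Closure, neutral element, and inverses all hold...
assert all(compose(f, g) in elems for f in elems for g in elems)
assert all(compose(f, identity) == f for f in elems)
assert all(any(compose(f, g) == identity for g in elems) for f in elems)
# ...but the operation is not commutative:
f, g = (1, 0, 2), (0, 2, 1)
assert compose(f, g) != compose(g, f)
```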
epfl-collab
Consider a public-key cryptosystem. Let $K_p$, $K_s$, $X$, and $Y$ be respectively the public key, private key, plaintext and ciphertext. Which assertion is \emph{always true}?
['$Dec_{K_s}(Enc_{K_p}(X))=X$', '$Enc_{K_p}(Dec_{K_s}(X))=X$', '$Dec_{K_p}(Enc_{K_s}(Y))=Y$', '$Enc_{K_s}(Dec_{K_p}(Y))=Y$']
A
null
Document 1::: Kerckhoffs's principle Kerckhoffs's principle (also called Kerckhoffs's desideratum, assumption, axiom, doctrine or law) of cryptography was stated by Dutch-born cryptographer Auguste Kerckhoffs in the 19th century. The principle holds that a cryptosystem should be secure, even if everything about the system, except the key, is public knowledge. This concept is widely embraced by cryptographers, in contrast to security through obscurity, which is not. Kerckhoffs's principle was phrased by American mathematician Claude Shannon as "the enemy knows the system", i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". Document 2::: Indistinguishability (cryptography) Ciphertext indistinguishability is a property of many encryption schemes. Intuitively, if a cryptosystem possesses the property of indistinguishability, then an adversary will be unable to distinguish pairs of ciphertexts based on the message they encrypt. The property of indistinguishability under chosen plaintext attack is considered a basic requirement for most provably secure public key cryptosystems, though some schemes also provide indistinguishability under chosen ciphertext attack and adaptive chosen ciphertext attack. Indistinguishability under chosen plaintext attack is equivalent to the property of semantic security, and many cryptographic proofs use these definitions interchangeably. Document 3::: Security protocol notation A key with one subscript, KA, is the public key of the corresponding individual. A private key is represented as the inverse of the public key. The notation specifies only the operation and not its semantics — for instance, private key encryption and signature are represented identically. Document 4::: Cryptosystem In cryptography, a cryptosystem is a suite of cryptographic algorithms needed to implement a particular security service, such as confidentiality (encryption).Typically, a cryptosystem consists of three algorithms: one for key generation, one for encryption, and one for decryption. The term cipher (sometimes cypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the term cryptosystem is most often used when the key generation algorithm is important. For this reason, the term cryptosystem is commonly used to refer to public key techniques; however both "cipher" and "cryptosystem" are used for symmetric key techniques. Document 5::: Decision Linear assumption The Decision Linear (DLIN) assumption is a computational hardness assumption used in elliptic curve cryptography. In particular, the DLIN assumption is useful in settings where the decisional Diffie–Hellman assumption does not hold (as is often the case in pairing-based cryptography). The Decision Linear assumption was introduced by Boneh, Boyen, and Shacham.Informally the DLIN assumption states that given ( u , v , h , u x , v y ) {\displaystyle (u,\,v,\,h,\,u^{x},\,v^{y})} , with u , v , h {\displaystyle u,\,v,\,h} random group elements and x , y {\displaystyle x,\,y} random exponents, it is hard to distinguish h x + y {\displaystyle h^{x+y}} from an independent random group element η {\displaystyle \eta } .
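Answer A: the one property every correct public-key cryptosystem guarantees is $Dec_{K_s}(Enc_{K_p}(X))=X$; the other identities need not hold in general. A textbook-RSA round trip as a sketch (toy primes, illustration only):

```python
p, q, e = 61, 53, 17
N, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent (modular inverse, Python 3.8+)

enc = lambda x: pow(x, e, N)     # Enc with public key (e, N)
dec = lambda y: pow(y, d, N)     # Dec with private key (d, N)

for x in (0, 1, 42, 2790):
    assert dec(enc(x)) == x      # Dec_Ks(Enc_Kp(X)) = X always holds
```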
epfl-collab
Select the \emph{incorrect} statement. Euler Theorem
['allows us to prove that RSA decryption works.', 'is a generalization of the Little Fermat Theorem.', 'gives the basis for polynomial time factoring.', 'states that for any $x \\in \\{0, \\dots, N-1 \\}$ and any $k$, we have $x^{k\\varphi(N)+1}=x \\pmod N$, where $N=pq$ for $p$,$q$ distinct primes.']
C
null
Document 1::: Euler calculus Euler calculus is a methodology from applied algebraic topology and integral geometry that integrates constructible functions and more recently definable functions by integrating with respect to the Euler characteristic as a finitely-additive measure. In the presence of a metric, it can be extended to continuous integrands via the Gauss–Bonnet theorem. It was introduced independently by Pierre Schapira and Oleg Viro in 1988, and is useful for enumeration problems in computational geometry and sensor networks. Document 2::: Euler's line In geometry, the Euler line, named after Leonhard Euler (), is a line determined from any triangle that is not equilateral. It is a central line of the triangle, and it passes through several important points determined from the triangle, including the orthocenter, the circumcenter, the centroid, the Exeter point and the center of the nine-point circle of the triangle.The concept of a triangle's Euler line extends to the Euler line of other shapes, such as the quadrilateral and the tetrahedron. Document 3::: Eastin-Knill theorem The Eastin–Knill theorem is a no-go theorem that states: "No quantum error correcting code can have a continuous symmetry which acts transversely on physical qubits". In other words, no quantum error correcting code can transversely implement a universal gate set. Since quantum computers are inherently noisy, quantum error correcting codes are used to correct errors that affect information due to decoherence. Decoding error corrected data in order to perform gates on the qubits makes it prone to errors. Document 4::: Euler (software) Euler (now Euler Mathematical Toolbox or EuMathT) is a free and open-source numerical software package. It contains a matrix language, a graphical notebook style interface, and a plot window. Euler is designed for higher level math such as calculus, optimization, and statistics. Document 5::: Euler numbers In mathematics, the Euler numbers are a sequence En of integers (sequence A122045 in the OEIS) defined by the Taylor series expansion 1 cosh ⁡ t = 2 e t + e − t = ∑ n = 0 ∞ E n n ! ⋅ t n {\displaystyle {\frac {1}{\cosh t}}={\frac {2}{e^{t}+e^{-t}}}=\sum _{n=0}^{\infty }{\frac {E_{n}}{n! }}\cdot t^{n}} ,where cosh ⁡ ( t ) {\displaystyle \cosh(t)} is the hyperbolic cosine function.
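The incorrect statement is C: Euler's theorem gives no polynomial-time factoring algorithm. Statement D, which is what makes RSA decryption work, can be checked exhaustively for a toy modulus:

```python
p, q = 5, 7
N = p * q
phi = (p - 1) * (q - 1)

# For squarefree N = pq, x^(k*phi(N)+1) == x (mod N) for ALL x in {0,...,N-1},
# even those with gcd(x, N) > 1 (check each prime factor separately via CRT).
for k in range(1, 4):
    for x in range(N):
        assert pow(x, k * phi + 1, N) == x
print("verified for N =", N)
```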
epfl-collab
Tick the \textit{correct} assertion.
['The set of quadratic residues in $\\mathbb{Z}_n$ is a field.', 'In a finite field $K$, every element has exactly two square roots.', 'In a finite field $K$, 1 has exactly one square root and it is 1.', 'An element can have more than two square roots in $\\mathbb{Z}_n$.']
D
null
Document 1::: Test assertion In computer software testing, a test assertion is an expression which encapsulates some testable logic specified about a target under test. The expression is formally presented as an assertion, along with some form of identifier, to help testers and engineers ensure that tests of the target relate properly and clearly to the corresponding specified statements about the target. Usually the logic for each test assertion is limited to one single aspect specified. A test assertion may include prerequisites which must be true for the test assertion to be valid. Document 2::: Mathematically Correct Mathematically Correct was a U.S.-based website created by educators, parents, mathematicians, and scientists who were concerned about the direction of reform mathematics curricula based on NCTM standards. Created in 1997, it was a frequently cited website in the so-called Math wars, and was actively updated until 2003. Document 3::: Kinetic proofreading Kinetic proofreading (or kinetic amplification) is a mechanism for error correction in biochemical reactions, proposed independently by John Hopfield (1974) and Jacques Ninio (1975). Kinetic proofreading allows enzymes to discriminate between two possible reaction pathways leading to correct or incorrect products with an accuracy higher than what one would predict based on the difference in the activation energy between these two pathways.Increased specificity is obtained by introducing an irreversible step exiting the pathway, with reaction intermediates leading to incorrect products more likely to prematurely exit the pathway than reaction intermediates leading to the correct product. If the exit step is fast relative to the next step in the pathway, the specificity can be increased by a factor of up to the ratio between the two exit rate constants. (If the next step is fast relative to the exit step, specificity will not be increased because there will not be enough time for exit to occur.) This can be repeated more than once to increase specificity further. Document 4::: Assertion (software development) In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception. Document 5::: Validated numerics Validated numerics, or rigorous computation, verified computation, reliable computation, numerical verification (German: Zuverlässiges Rechnen) is numerics including mathematically strict error (rounding error, truncation error, discretization error) evaluation, and it is one field of numerical analysis. For computation, interval arithmetic is used, and all results are represented by intervals. Validated numerics were used by Warwick Tucker in order to solve the 14th of Smale's problems, and today it is recognized as a powerful tool for the study of dynamical systems.
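The correct assertion is D: in $\mathbb{Z}_n$ with composite n, an element can have more than two square roots, which is exactly why such $\mathbb{Z}_n$ is not a field. A quick exhaustive check:

```python
def square_roots(a: int, n: int) -> list[int]:
    return [x for x in range(n) if (x * x) % n == a]

print(square_roots(1, 8))            # [1, 3, 5, 7]: four square roots of 1 in Z_8
print(square_roots(4, 15))           # [2, 7, 8, 13]: four square roots of 4 in Z_15
assert len(square_roots(1, 8)) > 2   # more than two roots is possible in Z_n
```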
epfl-collab
Let $p$ be a prime number and $n$ be an integer. What is the order of $\mathrm{GF}(p^n)$?
['$1-p^n$', '$p^n-1$', '$p^n$', '$p^{n-1}$']
C
null
Document 1::: Number Field Sieve In number theory, the general number field sieve (GNFS) is the most efficient classical algorithm known for factoring integers larger than $10^{100}$. Heuristically, its complexity for factoring an integer n (consisting of $\lfloor \log _{2}n\rfloor +1$ bits) is of the form ${\displaystyle \exp \left(\left((64/9)^{1/3}+o(1)\right)\left(\log n\right)^{1/3}\left(\log \log n\right)^{2/3}\right)=L_{n}\left[{\tfrac {1}{3}},(64/9)^{1/3}\right]}$ in O and L-notations. It is a generalization of the special number field sieve: while the latter can only factor numbers of a certain special form, the general number field sieve can factor any number apart from prime powers (which are trivial to factor by taking roots). Document 2::: Galois fields The order of a finite field is its number of elements, which is either a prime number or a prime power. For every prime number p and every positive integer k there are fields of order pk, all of which are isomorphic. Finite fields are fundamental in a number of areas of mathematics and computer science, including number theory, algebraic geometry, Galois theory, finite geometry, cryptography and coding theory. Document 3::: Multiplicative order In number theory, given a positive integer n and an integer a coprime to n, the multiplicative order of a modulo n is the smallest positive integer k such that a k ≡ 1 ( mod n ) {\textstyle a^{k}\ \equiv \ 1{\pmod {n}}} .In other words, the multiplicative order of a modulo n is the order of a in the multiplicative group of the units in the ring of the integers modulo n. The order of a modulo n is sometimes written as ord n ⁡ ( a ) {\displaystyle \operatorname {ord} _{n}(a)} . Document 4::: Supersingular prime (moonshine theory) In the mathematical branch of moonshine theory, a supersingular prime is a prime number that divides the order of the Monster group M, which is the largest sporadic simple group. There are precisely fifteen supersingular prime numbers: the first eleven primes (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, and 31), as well as 41, 47, 59, and 71. (sequence A002267 in the OEIS) The non-supersingular primes are 37, 43, 53, 61, 67, and any prime number greater than or equal to 73. Supersingular primes are related to the notion of supersingular elliptic curves as follows. Document 5::: Primitive element (finite field) In field theory, a primitive element of a finite field GF(q) is a generator of the multiplicative group of the field. In other words, α ∈ GF(q) is called a primitive element if it is a primitive (q − 1)th root of unity in GF(q); this means that each non-zero element of GF(q) can be written as αi for some integer i. If q is a prime number, the elements of GF(q) can be identified with the integers modulo q. In this case, a primitive element is also called a primitive root modulo q. For example, 2 is a primitive element of the field GF(3) and GF(5), but not of GF(7) since it generates the cyclic subgroup {2, 4, 1} of order 3; however, 3 is a primitive element of GF(7). The minimal polynomial of a primitive element is a primitive polynomial.
epfl-collab
Under which condition is an element $x\in \mathbb{Z}_n$ invertible?
['$\\mathsf{gcd}(x,n) \\ne 1$.', '$\\mathsf{gcd}(x,n) = 1$.', '$\\mathsf{gcd}(x,\\varphi (n)) = 1$.', '$\\mathsf{gcd}(x,n-1) = 1$.']
B
null
Document 1::: Invertible element An element is invertible under an operation if it has a left inverse and a right inverse. In the common case where the operation is associative, the left and right inverse of an element are equal and unique. Indeed, if l and r are respectively a left inverse and a right inverse of x, then l = l ∗ ( x ∗ r ) = ( l ∗ x ) ∗ r = r . {\displaystyle l=l*(x*r)=(l*x)*r=r.} Document 2::: Invertible element For example, consider the functions from the integers to the integers. The doubling function x ↦ 2 x {\displaystyle x\mapsto 2x} has infinitely many left inverses under function composition, which are the functions that divide by two the even numbers, and give any value to odd numbers. Document 3::: Quadratic Jordan algebra An element a is invertible if Q(a) is invertible and there exists b such that Q(b) is the inverse of Q(a) and Q(a)b = a: such b is unique and we say that b is the inverse of a. A Jordan division algebra is one in which every non-zero element is invertible. Document 4::: Invertible element The inverse of an invertible element is its unique left or right inverse. If the operation is denoted as an addition, the inverse, or additive inverse, of an element x is denoted − x . {\displaystyle -x.} Document 5::: Invertible element Otherwise, the inverse of x is generally denoted x − 1 , {\displaystyle x^{-1},} or, in the case of a commutative multiplication 1 x . {\textstyle {\frac {1}{x}}.} When there may be a confusion between several operations, the symbol of the operation may be added before the exponent, such as in x ∗ − 1 .
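Answer B: x is invertible in $\mathbb{Z}_n$ exactly when gcd(x, n) = 1. Python's three-argument pow computes the modular inverse when it exists and raises ValueError otherwise, which makes the equivalence easy to check:

```python
from math import gcd

n = 12
for x in range(n):
    invertible = gcd(x, n) == 1
    try:
        inv = pow(x, -1, n)                  # raises ValueError if no inverse exists
        assert invertible and (x * inv) % n == 1
    except ValueError:
        assert not invertible
print("units of Z_12:", [x for x in range(n) if gcd(x, n) == 1])
```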
epfl-collab
If Alice receives a message proven to be coming from Bob, we say that the message is\dots
['fresh', 'correct', 'authenticated', 'confidential']
C
null
Document 1::: Security protocol notation In cryptography, security (engineering) protocol notation, also known as protocol narrations and Alice & Bob notation, is a way of expressing a protocol of correspondence between entities of a dynamic system, such as a computer network. In the context of a formal model, it allows reasoning about the properties of such a system. The standard notation consists of a set of principals (traditionally named Alice, Bob, Charlie, and so on) who wish to communicate. They may have access to a server S, shared keys K, timestamps T, and can generate nonces N for authentication purposes. Document 2::: Blinding (cryptography) Alice "blinds" the message by encoding it into some other input E(x); the encoding E must be a bijection on the input space of f, ideally a random permutation. Oscar gives her f(E(x)), to which she applies a decoding D to obtain D(f(E(x))) = y. Not all functions allow for blind computation. Document 3::: Security protocol notation A simple example might be the following: A → B: { X } K A , B {\displaystyle A\rightarrow B:\{X\}_{K_{A,B}}} This states that Alice intends a message for Bob consisting of a plaintext X encrypted under shared key KA,B. Another example might be the following: B → A: { N B } K A {\displaystyle B\rightarrow A:\{N_{B}\}_{K_{A}}} This states that Bob intends a message for Alice consisting of a nonce NB encrypted using public key of Alice. A key with two subscripts, KA,B, is a symmetric key shared by the two corresponding individuals. Document 4::: Dot-decimal notation Dot-decimal notation is a presentation format for numerical data. It consists of a string of decimal numbers, using the full stop (dot) as a separation character.A common use of dot-decimal notation is in information technology where it is a method of writing numbers in octet-grouped base-10 (decimal) numbers. In computer networking, Internet Protocol Version 4 (IPv4) addresses are commonly written using the quad-dotted notation of four decimal integers, ranging from 0 to 255 each. Document 5::: Morse Message (1962) The Morse Message was a series of brief radio messages in Morse code that were transmitted from the Evpatoria Planetary Radar (EPR) complex and directed to the planet Venus in 1962.The message consisted of three words, all encoded in Morse code: the word “Mir” (Russian: “Мир”, meaning both “peace” and “world”) was transmitted from the EPR on November 19, 1962, and the words “Lenin” (Russian: “Ленин”) and “USSR” (Russian: “СССР”, the abbreviation for the Soviet Union — Russian: Сою́з Сове́тских Социалисти́ческих Респу́блик, Soyúz Soviétskikh Sotsialistícheskikh Respúblik) were transmitted on November 24, 1962. In Russian, the Morse Message is referred to as the Radio Message “MIR, LENIN, USSR”. The message was the first radio broadcast intended for extraterrestrial civilizations in the history of mankind.
epfl-collab
Which cryptographic primitive(s) is (are) used in S/Key - OTP?
['Only encryption and a MAC algorithm', 'Only a MAC', 'Only a hash function', 'Only encryption and a hash function']
C
null
Document 1::: Cryptographic primitive Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems. These routines include, but are not limited to, one-way hash functions and encryption functions. Document 2::: Blinding (cryptography) The one-time pad (OTP) is an application of blinding to the secure communication problem, by its very nature. Alice would like to send a message to Bob secretly, however all of their communication can be read by Oscar. Therefore, Alice sends the message after blinding it with a secret key or OTP that she shares with Bob. Document 3::: Cryptographic key types A cryptographic key is a string of data that is used to lock or unlock cryptographic functions, including authentication, authorization and encryption. Cryptographic keys are grouped into cryptographic key types according to the functions they perform. Document 4::: Hybrid cryptosystem This is addressed by hybrid systems by using a combination of both.A hybrid cryptosystem can be constructed using any two separate cryptosystems: a key encapsulation mechanism, which is a public-key cryptosystem a data encapsulation scheme, which is a symmetric-key cryptosystemThe hybrid cryptosystem is itself a public-key system, whose public and private keys are the same as in the key encapsulation scheme.Note that for very long messages the bulk of the work in encryption/decryption is done by the more efficient symmetric-key scheme, while the inefficient public-key scheme is used only to encrypt/decrypt a short key value.All practical implementations of public key cryptography today employ the use of a hybrid system. Examples include the TLS protocol and the SSH protocol, that use a public-key mechanism for key exchange (such as Diffie-Hellman) and a symmetric-key mechanism for data encapsulation (such as AES). The OpenPGP file format and the PKCS#7 file format are other examples. Document 5::: Cryptographic Message Syntax The architecture of CMS is built around certificate-based key management, such as the profile defined by the PKIX working group. CMS is used as the key cryptographic component of many other cryptographic standards, such as S/MIME, PKCS #12 and the RFC 3161 digital timestamping protocol. OpenSSL is open source software that can encrypt, decrypt, sign and verify, compress and uncompress CMS documents, using the openssl-cms command.
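Answer C: S/Key needs only a hash function. The server stores h^N(seed); each one-time password is the value one step earlier in the chain, and verification is a single hash. A sketch using SHA-256 as a stand-in for the S/Key hash (the historical scheme used MD4/MD5 with 64-bit folding):

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def chain(seed: bytes, n: int) -> bytes:
    x = seed
    for _ in range(n):
        x = H(x)
    return x

seed, N = b"secret seed", 100
server_state = chain(seed, N)        # server stores h^N(seed), not the seed

otp = chain(seed, N - 1)             # client reveals h^(N-1)(seed) as the OTP
assert H(otp) == server_state        # server verifies with one hash evaluation
server_state = otp                   # and rolls its state one step back
```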
epfl-collab
Let $(e,N)$ be the public parameters of the RSA cryptosystem. What is the advantage of taking a \emph{small} value for $e$?
['The complexity of the encryption step is smaller.', 'The complexity of the parameters generation is smaller.', 'The whole system is stronger against several attacks.', 'The complexity of the decryption step is smaller.']
A
null
Document 1::: RSA Cryptosystem An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decoded by someone who knows the prime numbers.The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". Document 2::: RSA Cryptosystem RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys for symmetric-key cryptography, which are then used for bulk encryption–decryption. Document 3::: Blinding (cryptography) Depending on the characteristics of the blinding function, this can prevent some or all leakage of useful information. Note that security depends also on the resistance of the blinding functions themselves to side-channel attacks. For example, in RSA blinding involves computing the blinding operation E(x) = (xr)e mod N, where r is a random integer between 1 and N and relatively prime to N (i.e. gcd(r, N) = 1), x is the plaintext, e is the public RSA exponent and N is the RSA modulus. As usual, the decryption function f(z) = zd mod N is applied thus giving f(E(x)) = (xr)ed mod N = xr mod N. Finally it is unblinded using the function D(z) = zr−1 mod N. Multiplying xr mod N by r−1 mod N yields x, as desired. When decrypting in this manner, an adversary who is able to measure time taken by this operation would not be able to make use of this information (by applying timing attacks RSA is known to be vulnerable to) as she does not know the constant r and hence has no knowledge of the real input fed to the RSA primitives. Document 4::: Factoring integers The researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any computer increases drastically. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure. Document 5::: Montgomery modular multiplication The need to convert a and b into Montgomery form and their product out of Montgomery form means that computing a single product by Montgomery multiplication is slower than the conventional or Barrett reduction algorithms. However, when performing many multiplications in a row, as in modular exponentiation, intermediate results can be left in Montgomery form. Then the initial and final conversions become a negligible fraction of the overall computation. 
Many important cryptosystems such as RSA and Diffie–Hellman key exchange are based on arithmetic operations modulo a large odd number, and for these cryptosystems, computations using Montgomery multiplication with R a power of two are faster than the available alternatives.
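Answer A: square-and-multiply does one squaring per exponent bit (plus a multiply per 1-bit), so a small public exponent such as e = 65537 (17 bits, two 1-bits) makes encryption cheap, while the private exponent d, and hence decryption, stays full-size. A rough timing sketch with an arbitrary odd modulus (illustration only, not a valid RSA key):

```python
import secrets, time

N = secrets.randbits(2048) | 1      # stand-in odd 2048-bit modulus
m = secrets.randbits(2000)
e_small = 65537
d_big = secrets.randbits(2048)      # a decryption exponent is about the size of N

def bench(exp: int) -> float:
    t = time.perf_counter()
    for _ in range(50):
        pow(m, exp, N)
    return time.perf_counter() - t

print(f"e = 65537 : {bench(e_small):.4f}s")
print(f"full-size : {bench(d_big):.4f}s   # roughly 2048/17 ~ 120x the work")
```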
epfl-collab
Let $p$ and $q$ be two distinct prime numbers and let $x \in \mathbf{Z}_{pq}^*$. Which of the following assertions is always true in $\mathbf{Z}_{pq}^*$?
['$x^{(p-1)(q-1)} = 1$', '$x^{pq} = 1$', '$x^{p} = 1$', '$x^{q} = 1$']
A
null
Document 1::: Prime element Interest in prime elements comes from the fundamental theorem of arithmetic, which asserts that each nonzero integer can be written in essentially only one way as 1 or −1 multiplied by a product of positive prime numbers. This led to the study of unique factorization domains, which generalize what was just illustrated in the integers. Being prime is relative to which ring an element is considered to be in; for example, 2 is a prime element in Z but it is not in Z, the ring of Gaussian integers, since 2 = (1 + i)(1 − i) and 2 does not divide any factor on the right. Document 2::: Prime element An element p of a commutative ring R is said to be prime if it is not the zero element or a unit and whenever p divides ab for some a and b in R, then p divides a or p divides b. With this definition, Euclid's lemma is the assertion that prime numbers are prime elements in the ring of integers. Equivalently, an element p is prime if, and only if, the principal ideal (p) generated by p is a nonzero prime ideal. (Note that in an integral domain, the ideal (0) is a prime ideal, but 0 is an exception in the definition of 'prime element'.) Document 3::: Supersingular prime (algebraic number theory) In algebraic number theory, a supersingular prime for a given elliptic curve is a prime number with a certain relationship to that curve. If the curve E is defined over the rational numbers, then a prime p is supersingular for E if the reduction of E modulo p is a supersingular elliptic curve over the residue field Fp. Noam Elkies showed that every elliptic curve over the rational numbers has infinitely many supersingular primes. However, the set of supersingular primes has asymptotic density zero (if E does not have complex multiplication). Document 4::: Quadratic pair In mathematical finite group theory, a quadratic pair for the odd prime p, introduced by Thompson (1971), is a finite group G together with a quadratic module, a faithful representation M on a vector space over the finite field with p elements such that G is generated by elements with minimal polynomial (x − 1)2. Thompson classified the quadratic pairs for p ≥ 5. Chermak (2004) classified the quadratic pairs for p = 3. With a few exceptions, especially for p = 3, groups with a quadratic pair for the prime p tend to be more or less groups of Lie type in characteristic p. Document 5::: Wieferich pair In mathematics, a Wieferich pair is a pair of prime numbers p and q that satisfy pq − 1 ≡ 1 (mod q2) and qp − 1 ≡ 1 (mod p2)Wieferich pairs are named after German mathematician Arthur Wieferich. Wieferich pairs play an important role in Preda Mihăilescu's 2002 proof of Mihăilescu's theorem (formerly known as Catalan's conjecture).
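Answer A: the group $\mathbf{Z}_{pq}^*$ has order (p-1)(q-1), so Lagrange's theorem forces $x^{(p-1)(q-1)}=1$ for every unit, while each of the other exponents fails for some x. An exhaustive check for p = 3, q = 5:

```python
from math import gcd

p, q = 3, 5
n = p * q
units = [x for x in range(1, n) if gcd(x, n) == 1]   # Z_15^*, order (p-1)(q-1) = 8

assert all(pow(x, (p - 1) * (q - 1), n) == 1 for x in units)   # always true
assert any(pow(x, p * q, n) != 1 for x in units)               # x^(pq) = 1 can fail
assert any(pow(x, p, n) != 1 for x in units)                   # x^p = 1 can fail
assert any(pow(x, q, n) != 1 for x in units)                   # x^q = 1 can fail
```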
epfl-collab
Let $h$ be a cryptographic hash function based on the Merkle-Damg{\aa}rd scheme. The Merkle-Damg{\aa}rd Theorem states that\dots
['\\dots if the compression function is collision-resistant, then $h$ is collision-resistant.', '\\dots $h$ is resistant to a first preimage attack.', '\\dots if $h$ is collision-resistant, then the compression function is collision-resistant.', '\\dots $h$ is collision-resistant.']
A
null
Document 1::: Hash chain A hash chain is a successive application of a cryptographic hash function h {\displaystyle h} to a string x {\displaystyle x} . For example, h ( h ( h ( h ( x ) ) ) ) {\displaystyle h(h(h(h(x))))} gives a hash chain of length 4, often denoted h 4 ( x ) {\displaystyle h^{4}(x)} Document 2::: Minimal perfect hash function In computer science, a perfect hash function h for a set S is a hash function that maps distinct elements in S to a set of m integers, with no collisions. In mathematical terms, it is an injective function. Perfect hash functions may be used to implement a lookup table with constant worst-case access time. A perfect hash function can, as any hash function, be used to implement hash tables, with the advantage that no collision resolution has to be implemented. Document 3::: Merkle tree In cryptography and computer science, a hash tree or Merkle tree is a tree in which every "leaf" (node) is labelled with the cryptographic hash of a data block, and every node that is not a leaf (called a branch, inner node, or inode) is labelled with the cryptographic hash of the labels of its child nodes. A hash tree allows efficient and secure verification of the contents of a large data structure. A hash tree is a generalization of a hash list and a hash chain. Demonstrating that a leaf node is a part of a given binary hash tree requires computing a number of hashes proportional to the logarithm of the number of leaf nodes in the tree. Document 4::: Merkle tree Conversely, in a hash list, the number is proportional to the number of leaf nodes itself. A Merkle tree is therefore an efficient example of a cryptographic commitment scheme, in which the root of the tree is seen as a commitment and leaf nodes may be revealed and proven to be part of the original commitment. The concept of a hash tree is named after Ralph Merkle, who patented it in 1979. Document 5::: Computational Diffie–Hellman assumption Consider a cyclic group G of order q. The CDH assumption states that, given ( g , g a , g b ) {\displaystyle (g,g^{a},g^{b})\,} for a randomly chosen generator g and random a , b ∈ { 0 , … , q − 1 } , {\displaystyle a,b\in \{0,\ldots ,q-1\},\,} it is computationally intractable to compute the value g a b . {\displaystyle g^{ab}.\,}
epfl-collab
$\mathbb{Z}_{37}^*$ denotes ...
['a field.', 'a multiplicative group.', 'a ring.', 'an additive group.']
B
null
Document 1::: Z* theorem In mathematics, George Glauberman's Z* theorem is stated as follows: Z* theorem: Let G be a finite group, with O(G) being its maximal normal subgroup of odd order. If T is a Sylow 2-subgroup of G containing an involution not conjugate in G to any other element of T, then the involution lies in Z*(G), which is the inverse image in G of the center of G/O(G). This generalizes the Brauer–Suzuki theorem (and the proof uses the Brauer–Suzuki theorem to deal with some small cases). Document 2::: Signed-digit representation A Prüfer group is the quotient group Z(b^∞) = Z[1/b]/Z of the integers and the b-adic rationals. The set of all signed-digit representations of the Prüfer group is given by the Kleene star D*, the set of all finite concatenated strings of digits d_1 … d_n, with n ∈ N. Each signed-digit representation p ∈ D* has a valuation v_D: D* → Z(b^∞), given by v_D(m) ≡ Σ_{i=1}^{n} f_D(d_i) b^(−i) mod 1. Document 3::: WISE J0005+3737 WISE J0005+3737, full designation WISE J000517.48+373720.5, is a brown dwarf of spectral class T9, located in constellation Andromeda at approximately 23 light-years from Earth. Document 4::: Zariski space In algebraic geometry, a Zariski space, named for Oscar Zariski, has several different meanings: A topological space that is Noetherian (every open set is quasicompact) A topological space that is Noetherian and also sober (every nonempty closed irreducible subset is the closure of a unique point). The spectrum of any commutative Noetherian ring is a Zariski space in this sense A Zariski–Riemann space of valuations of a field Document 5::: Kepler-37 Kepler-37, also known as UGA-1785, is a G-type main-sequence star located in the constellation Lyra 209 light-years (64 parsecs) from Earth. It is host to exoplanets Kepler-37b, Kepler-37c, Kepler-37d and possibly Kepler-37e, all of which orbit very close to it. Kepler-37 has a mass about 80.3 percent of the Sun's and a radius about 77 percent as large.
epfl-collab
Visual cryptography is a nice visual application of \ldots
['\\ldots ROT13.', '\\ldots the Caesar cipher.', '\\ldots the Vernam cipher.', '\\ldots the Vigen\\`ere cipher.']
C
null
Document 1::: Visual cryptography Visual cryptography is a cryptographic technique which allows visual information (pictures, text, etc.) to be encrypted in such a way that the decrypted information appears as a visual image. One of the best-known techniques has been credited to Moni Naor and Adi Shamir, who developed it in 1994. They demonstrated a visual secret sharing scheme, where an image was broken up into n shares so that only someone with all n shares could decrypt the image, while any n − 1 shares revealed no information about the original image. Each share was printed on a separate transparency, and decryption was performed by overlaying the shares. Document 2::: Visual cryptography When all n shares were overlaid, the original image would appear. There are several generalizations of the basic scheme including k-out-of-n visual cryptography, and using opaque sheets but illuminating them by multiple sets of identical illumination patterns under the recording of only one single-pixel detector.Using a similar idea, transparencies can be used to implement a one-time pad encryption, where one transparency is a shared random pad, and another transparency acts as the ciphertext. Normally, there is an expansion of space requirement in visual cryptography. But if one of the two shares is structured recursively, the efficiency of visual cryptography can be increased to 100%.Some antecedents of visual cryptography are in patents from the 1960s. Other antecedents are in the work on perception and secure communication.Visual cryptography can be used to protect biometric templates in which decryption does not require any complex computations. Document 3::: Geometric cryptography Geometric cryptography is an area of cryptology where messages and ciphertexts are represented by geometric quantities such as angles or intervals and where computations are performed by ruler and compass constructions. The difficulty or impossibility of solving certain geometric problems like trisection of an angle using ruler and compass only is the basis for the various protocols in geometric cryptography. This field of study was suggested by Mike Burmester, Ronald L. Rivest and Adi Shamir in 1996. Though the cryptographic methods based on geometry have practically no real life applications, they are of use as pedagogic tools for the elucidation of other more complex cryptographic protocols. Geometric cryptography may have applications in the future once current mainstream encryption methods are made obsolete by quantum computing. Document 4::: Crypto++ Crypto++ (also known as CryptoPP, libcrypto++, and libcryptopp) is a free and open-source C++ class library of cryptographic algorithms and schemes written by Wei Dai. Crypto++ has been widely used in academia, student projects, open-source, and non-commercial projects, as well as businesses. Released in 1995, the library fully supports 32-bit and 64-bit architectures for many major operating systems and platforms, including Android (using STLport), Apple (macOS and iOS), BSD, Cygwin, IBM AIX, Linux, MinGW, Solaris, Windows, Windows Phone and Windows RT. Document 5::: Non-commutative cryptography Non-commutative cryptography is the area of cryptology where the cryptographic primitives, methods and systems are based on algebraic structures like semigroups, groups and rings which are non-commutative. One of the earliest applications of a non-commutative algebraic structure for cryptographic purposes was the use of braid groups to develop cryptographic protocols. 
Later several other non-commutative structures like Thompson groups, polycyclic groups, Grigorchuk groups, and matrix groups have been identified as potential candidates for cryptographic applications.
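To make the overlay idea from Document 1 concrete, here is a small 2-out-of-2 visual secret sharing sketch in Python (illustrative only; the function names are ours, and the scheme follows the textbook Naor–Shamir construction with each secret pixel expanded into two subpixels per share):

import random

def make_shares(secret_row):
    # 0 = white, 1 = black; each share gets two subpixels per secret pixel.
    share1, share2 = [], []
    for pixel in secret_row:
        pattern = random.choice([(0, 1), (1, 0)])
        share1.extend(pattern)
        if pixel == 0:
            share2.extend(pattern)                            # white: identical patterns
        else:
            share2.extend((1 - pattern[0], 1 - pattern[1]))   # black: complementary patterns
    return share1, share2

def overlay(s1, s2):
    # Stacking transparencies: a subpixel is black if either share is black.
    return [a | b for a, b in zip(s1, s2)]

row = [1, 0, 1, 1, 0]
s1, s2 = make_shares(row)
print(overlay(s1, s2))  # black pixels become (1, 1); white pixels keep one white subpixel

Each share in isolation is a uniformly random pattern, which is exactly the one-time-pad flavour that makes the Vernam cipher the right answer above.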
epfl-collab
Select the \emph{incorrect} statement.
['The order of an element is always a multiple of the order of its group.', 'An ideal $I$ of a commutative ring $R$ is a subgroup closed under multiplication by all elements of $R$.', 'Any element of order $\\varphi(n)$ is a generator of $\\mathbb{Z}_n^*$.', 'Given a prime $p$, we have $a^{p} = a$ for every $a \\in \\mathbb{Z}_p$.']
A
null
Document 1::: Uncertain inference Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus d → q is uncertain. This will affect the plausibility of a given query. Document 2::: Transposition error A transcription error is a specific type of data entry error that is commonly made by human operators or by optical character recognition (OCR) programs. Human transcription errors are commonly the result of typographical mistakes; putting one's fingers in the wrong place while touch typing is the easiest way to make this error. Electronic transcription errors occur when the scan of some printed matter is compromised or in an unusual font – for example, if the paper is crumpled, or the ink is smudged, the OCR may make transcription errors when reading. Document 3::: Ordered field The element −1 is not in P. Document 4::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol ⊥. Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), ⊥, is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity.
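The facts being tested above can be checked numerically. A small illustrative Python sketch (ours; element_order is a hypothetical helper): the order of an element of Z_n^* divides phi(n) rather than being a multiple of the group order (which is why option 1 is the incorrect statement), and a^p = a holds for every a in Z_p:

from math import gcd

def element_order(a, n):
    # Multiplicative order of a modulo n; assumes gcd(a, n) == 1.
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

p = 13
assert all(pow(a, p, p) == a % p for a in range(p))  # Fermat: a^p = a in Z_p

n = 15
units = [a for a in range(1, n) if gcd(a, n) == 1]
phi = len(units)
assert all(phi % element_order(a, n) == 0 for a in units)
print("every element order divides phi(n) =", phi)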
epfl-collab
Which one of these is a closed set?
['$\\mathbb{Z}$ with the addition.', '$\\mathbb{Z}^\\star$ with the addition.', '$\\mathbb{Z}^\\star$ with the subtraction.', '$\\mathbb{Z}-\\{0\\}$ with the division.']
A
null
Document 1::: Closure (mathematics) Let S be a set equipped with one or several methods for producing elements of S from other elements of S. A subset X of S is said to be closed under these methods, if, when all input elements are in X, then all possible results are also in X. Sometimes, one may also say that X has the closure property. The main property of closed sets, which results immediately from the definition, is that every intersection of closed sets is a closed set. It follows that for every subset Y of S, there is a smallest closed subset X of S such that Y ⊆ X (it is the intersection of all closed subsets that contain Y). Depending on the context, X is called the closure of Y or the set generated or spanned by Y. The concepts of closed sets and closure are often extended to any property of subsets that are stable under intersection; that is, every intersection of subsets that have the property also has the property. For example, in C^n, a Zariski-closed set, also known as an algebraic set, is the set of the common zeros of a family of polynomials, and the Zariski closure of a set V of points is the smallest algebraic set that contains V. Document 2::: Closed graph Definition and notation: The graph of a function f: X → Y is the set Gr f := { (x, f(x)): x ∈ X } = { (x, y) ∈ X × Y: y = f(x) }. Notation: If Y is a set then the power set of Y, which is the set of all subsets of Y, is denoted by 2^Y or 𝒫(Y). Definition: If X and Y are sets, a set-valued function in Y on X (also called a Y-valued multifunction on X) is a function F: X → 2^Y with domain X that is valued in 2^Y. That is, F is a function on X such that for every x ∈ X, F(x) is a subset of Y. Some authors call a function F: X → 2^Y a set-valued function only if it satisfies the additional requirement that F(x) is not empty for every x ∈ X; this article does not require this. Definition and notation: If F: X → 2^Y is a set-valued function in a set Y then the graph of F is the set Gr F := { (x, y) ∈ X × Y: y ∈ F(x) }. Definition: A function f: X → Y can be canonically identified with the set-valued function F: X → 2^Y defined by F(x) := { f(x) } for every x ∈ X, where F is called the canonical set-valued function induced by (or associated with) f. Note that in this case, Gr f = Gr F. Document 3::: Closed graph Definition and notation: When we write f: D(f) ⊆ X → Y then we mean that f is a Y-valued function with domain D(f) where D(f) ⊆ X. If we say that f: D(f) ⊆ X → Y is closed (resp. sequentially closed) or has a closed graph (resp. has a sequentially closed graph) then we mean that the graph of f is closed (resp. sequentially closed) in X × Y (rather than in D(f) × Y). When reading literature in functional analysis, if f: X → Y is a linear map between topological vector spaces (TVSs) (e.g. Banach spaces) then "f is closed" will almost always mean the following: Definition: A map f: X → Y is called closed if its graph is closed in X × Y. In particular, the term "closed linear operator" will almost certainly refer to a linear map whose graph is closed. Otherwise, especially in literature about point-set topology, "f is closed" may instead mean the following: Definition: A map f: X → Y between topological spaces is called a closed map if the image of a closed subset of X is a closed subset of Y. These two definitions of "closed map" are not equivalent.
If it is unclear, then it is recommended that a reader check how "closed map" is defined by the literature they are reading. Document 4::: Closed graph open graph, sequentially closed graph, sequentially open graph) in X × Y if the graph of f, Gr f, is a closed (resp. open, sequentially closed, sequentially open) subset of X × Y when X × Y is endowed with the product topology. If S = X or if X is clear from context then we may omit writing "in X × Y". Observation: If g: S → Y is a function and G is the canonical set-valued function induced by g (i.e. G: S → 2^Y is defined by G(s) := { g(s) } for every s ∈ S) then since Gr g = Gr G, g has a closed (resp. sequentially closed, open, sequentially open) graph in X × Y if and only if the same is true of G. Document 5::: Closed graph We give the more general definition of when a Y-valued function or set-valued function defined on a subset S of X has a closed graph since this generality is needed in the study of closed linear operators that are defined on a dense subspace S of a topological vector space X (and not necessarily defined on all of X). This particular case is one of the main reasons why functions with closed graphs are studied in functional analysis. Assumptions: Throughout, X and Y are topological spaces, S ⊆ X, and f is a Y-valued function or set-valued function on S (i.e. f: S → Y or f: S → 2^Y). X × Y will always be endowed with the product topology. Definition: We say that f has a closed graph (resp.
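The closure property in Document 1 can be probed with a quick sampled check in Python (an illustrative sketch of ours, not an exhaustive proof): addition keeps integers inside Z, while division already fails on Z \ {0}:

def closed_on_samples(op, member, samples):
    # Closure tested on sample pairs: op(a, b) must stay in the set.
    return all(member(op(a, b)) for a in samples for b in samples)

def is_integer(x):
    return x == int(x)

ints = list(range(-10, 11))
print(closed_on_samples(lambda a, b: a + b, is_integer, ints))      # True: Z closed under +
nonzero = [a for a in ints if a != 0]
print(closed_on_samples(lambda a, b: a / b, is_integer, nonzero))   # False, e.g. 1/2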
epfl-collab
Tick the \textbf{incorrect} assertion.
['ECDSA uses elliptic curves.', 'An ECDSA signature consists in the message and a pair of elements in $\\mathbb{Z}_n$.', 'Subtraction is hard to perform on an elliptic curve.', 'PKCS\\#1v1.5 uses plain RSA as an internal routine.']
C
null
Document 1::: Talk:Fibonacci sequence Well spotted (and sorry for reverting). TheMathCat (talk) 15:43, 15 September 2022 (UTC) (edit conflict) OK, you reverted me just before my self revert. On the table on the right, there is a large vertical space between the first index and the first value (1). Document 2::: Charset detection Character encoding detection, charset detection, or code page detection is the process of heuristically guessing the character encoding of a series of bytes that represent text. The technique is recognised to be unreliable and is only used when specific metadata, such as a HTTP Content-Type: header is either not available, or is assumed to be untrustworthy. This algorithm usually involves statistical analysis of byte patterns, like frequency distribution of trigraphs of various languages encoded in each code page that will be detected; such statistical analysis can also be used to perform language detection. Document 3::: Talk:Fibonacci sequence WP:CITEVAR is very clear that you should not be changing citation styles in this way without consensus. For those of us who use User:BrandonXLF/CitationStyleMarker.js to find inconsistent citation styles, your change is very annoying because it causes all of the citations to be flagged as inconsistent. Also your claim that this is helpful for bots and error checking seems dubious to me. Document 4::: Talk:Fibonacci sequence Setting a specific type is helpful for bots and for error checking. (For instance, if the "journal" parameter is empty it will display an error for "cite journal" but not for "citation". In this case it's not empty; it's just an example.) Document 5::: Assertion (software development) In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception.
epfl-collab
Select the \emph{correct} statement. The Plain RSA Signature scheme
['has modulus $N=p^2$.', 'has a secret modulus $d$ to be selected so that $e+d = 0 \\pmod{\\varphi(N)}$.', 'has public modulus $e$ to be selected so that $\\text{gcd} (e, \\varphi(N)) > 1$.', 'allows us to pick a fixed public key exponent like $e=3$ or $e=2^{16}+1$.']
D
null
Document 1::: Probabilistic signature scheme Probabilistic Signature Scheme (PSS) is a cryptographic signature scheme designed by Mihir Bellare and Phillip Rogaway.RSA-PSS is an adaptation of their work and is standardized as part of PKCS#1 v2.1. In general, RSA-PSS should be used as a replacement for RSA-PKCS#1 v1.5. Document 2::: RSA Cryptosystem RSA (Rivest–Shamir–Adleman) is a public-key cryptosystem, one of the oldest, that is widely used for secure data transmission. The acronym "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at Government Communications Headquarters (GCHQ) (the British signals intelligence agency) by the English mathematician Clifford Cocks. That system was declassified in 1997.In a public-key cryptosystem, the encryption key is public and distinct from the decryption key, which is kept secret (private). Document 3::: Digital Signature Standard The Digital Signature Standard (DSS) is a Federal Information Processing Standard specifying a suite of algorithms that can be used to generate digital signatures established by the U.S. National Institute of Standards and Technology (NIST) in 1994. Four revisions to the initial specification have been released: FIPS 186-1 in 1996, FIPS 186-2 in 2000, FIPS 186-3 in 2009, and FIPS 186-4 in 2013.It defines the Digital Signature Algorithm, contains a definition of RSA signatures based on the definitions contained within PKCS #1 version 2.1 and in American National Standard X9.31 with some additional requirements, and contains a definition of the Elliptic Curve Digital Signature Algorithm based on the definition provided by American National Standard X9.62 with some additional requirements and some recommended elliptic curves. It also approves the use of all three algorithms. == References == Document 4::: Cryptographic Message Syntax The Cryptographic Message Syntax (CMS) is the IETF's standard for cryptographically protected messages. It can be used by cryptographic schemes and protocols to digitally sign, digest, authenticate or encrypt any form of digital data. CMS is based on the syntax of PKCS #7, which in turn is based on the Privacy-Enhanced Mail standard. The newest version of CMS (as of 2009) is specified in RFC 5652 (but see also RFC 5911 for updated ASN.1 modules conforming to ASN.1 2002). Document 5::: RSA Cryptosystem An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decoded by someone who knows the prime numbers.The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem".
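A toy plain RSA signature in Python illustrates the correct option above: e can be a fixed small exponent such as 3 or 2**16 + 1, as long as gcd(e, phi(N)) = 1 and d is its inverse mod phi(N). The primes below are deliberately tiny and the scheme is unpadded, so this is a sketch of the math, not usable cryptography:

from math import gcd

p, q = 47, 59                  # secret primes (toy sizes)
N = p * q                      # public modulus N = p*q, not p**2
phi = (p - 1) * (q - 1)
e = 3                          # fixed public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # secret exponent: e*d = 1 (mod phi(N))

m = 42                         # message, already reduced mod N
sig = pow(m, d, N)             # sign: s = m^d mod N
assert pow(sig, e, N) == m     # verify: s^e mod N recovers m
print("signature", sig, "verifies")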
epfl-collab
Which of the following is an element of $\mathbb{Z}_{60}^*$?
['49', '30', '26', '21']
A
null
Document 1::: Signed-digit representation A Prüfer group is the quotient group Z(b^∞) = Z[1/b]/Z of the integers and the b-adic rationals. The set of all signed-digit representations of the Prüfer group is given by the Kleene star D*, the set of all finite concatenated strings of digits d_1 … d_n, with n ∈ N. Each signed-digit representation p ∈ D* has a valuation v_D: D* → Z(b^∞), given by v_D(m) ≡ Σ_{i=1}^{n} f_D(d_i) b^(−i) mod 1. Document 2::: Sylow system A Hall divisor (also called a unitary divisor) of an integer n is a divisor d of n such that d and n/d are coprime. The easiest way to find the Hall divisors is to write the prime power factorization of the number in question and take any subset of the factors. For example, to find the Hall divisors of 60, its prime power factorization is 2^2 × 3 × 5, so one takes any product of 3, 2^2 = 4, and 5. Thus, the Hall divisors of 60 are 1, 3, 4, 5, 12, 15, 20, and 60. A Hall subgroup of G is a subgroup whose order is a Hall divisor of the order of G. In other words, it is a subgroup whose order is coprime to its index. If π is a set of primes, then a Hall π-subgroup is a subgroup whose order is a product of primes in π, and whose index is not divisible by any primes in π. Document 3::: Ring of all algebraic integers In algebraic number theory, an algebraic integer is a complex number which is integral over the integers. That is, an algebraic integer is a complex root of some monic polynomial (a polynomial whose leading coefficient is 1) whose coefficients are integers. The set of all algebraic integers A is closed under addition, subtraction and multiplication and therefore is a commutative subring of the complex numbers. The ring of integers of a number field K, denoted by OK, is the intersection of K and A: it can also be characterised as the maximal order of the field K. Each algebraic integer belongs to the ring of integers of some number field. A number α is an algebraic integer if and only if the ring Z[α] is finitely generated as an abelian group, which is to say, as a Z-module. Document 4::: Klein configuration The 60 points are three concurrent lines forming an odd permutation, shown below. The sixty planes are 3 coplanar lines forming even permutations, obtained by reversing the last two digits in the points. For any point or plane there are 15 members in the other set containing those 3 lines. Document 5::: Z* theorem In mathematics, George Glauberman's Z* theorem is stated as follows: Z* theorem: Let G be a finite group, with O(G) being its maximal normal subgroup of odd order. If T is a Sylow 2-subgroup of G containing an involution not conjugate in G to any other element of T, then the involution lies in Z*(G), which is the inverse image in G of the center of G/O(G). This generalizes the Brauer–Suzuki theorem (and the proof uses the Brauer–Suzuki theorem to deal with some small cases).
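Membership in Z_60^* reduces to coprimality with 60 = 2^2 × 3 × 5, the same factorisation used for the Hall divisors in Document 2. A quick illustrative check in Python (ours):

from math import gcd

for candidate in (49, 30, 26, 21):
    verdict = "in" if gcd(candidate, 60) == 1 else "not in"
    print(candidate, verdict, "Z_60^*")   # only 49 = 7^2 is coprime to 60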
epfl-collab
Which of the following algorithms is \emph{not} a hash function?
['SHA-1', 'MD4', 'RC4', 'MD5']
C
null
Document 1::: Secure Hash Algorithms The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS), including: SHA-0: A retronym applied to the original version of the 160-bit hash function published in 1993 under the name "SHA". It was withdrawn shortly after publication due to an undisclosed "significant flaw" and replaced by the slightly revised version SHA-1. SHA-1: A 160-bit hash function which resembles the earlier MD5 algorithm. Document 2::: Hashing algorithm A hash function is any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable length output. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter storage addressing. Document 3::: Hashing algorithm Use of hash functions relies on statistical properties of key and function interaction: worst-case behaviour is intolerably bad but rare, and average-case behaviour can be nearly optimal (minimal collision). Hash functions are related to (and often confused with) checksums, check digits, fingerprints, lossy compression, randomization functions, error-correcting codes, and ciphers. Although the concepts overlap to some extent, each one has its own uses and requirements and is designed and optimized differently. The hash function differs from these concepts mainly in terms of data integrity. Document 4::: Hashing algorithm Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval. They require an amount of storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally and storage space-efficient form of data access that avoids the non-constant access time of ordered and unordered lists and structured trees, and the often exponential storage requirements of direct access of state spaces of large or variable-length keys. Document 5::: Minimal perfect hash function In computer science, a perfect hash function h for a set S is a hash function that maps distinct elements in S to a set of m integers, with no collisions. In mathematical terms, it is an injective function. Perfect hash functions may be used to implement a lookup table with constant worst-case access time. A perfect hash function can, as any hash function, be used to implement hash tables, with the advantage that no collision resolution has to be implemented.
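The named hash functions are directly available through Python's standard hashlib, while RC4 is a stream cipher with no digest interface at all, which is the point of the question. A short illustration (MD4 support depends on the underlying OpenSSL build, so it is accessed defensively here):

import hashlib

data = b"hello"
print("SHA-1:", hashlib.sha1(data).hexdigest())
print("MD5:  ", hashlib.md5(data).hexdigest())
try:
    print("MD4:  ", hashlib.new("md4", data).hexdigest())
except ValueError:
    print("MD4 is not provided by this OpenSSL build")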
epfl-collab
Select the \emph{correct} answer.
['The dictionary attack needs no precomputation.', 'The dictionary attack has a memory complexity of order 1.', 'The success probability of the dictionary attack depends on the size of the dictionary.', 'The multi-target dictionary attack needs no precomputation.']
C
null
Document 1::: Multiple choice questions Multiple choice (MC), objective response or MCQ (for multiple choice question) is a form of an objective assessment in which respondents are asked to select only correct answers from the choices offered as a list. The multiple choice format is most frequently used in educational testing, in market research, and in elections, when a person chooses between multiple candidates, parties, or policies. Although E. L. Thorndike developed an early scientific approach to testing students, it was his assistant Benjamin D. Wood who developed the multiple-choice test. Document 2::: Programmed learning After each step, learners are given a question to test their comprehension. Then immediately the correct answer is shown. Document 3::: Precision and recall Written as a formula: recall = (relevant retrieved instances) / (all relevant instances). Both precision and recall are therefore based on relevance. Document 4::: Precision Questioning Precision questioning (PQ), an intellectual toolkit for critical thinking and for problem solving, grew out of a collaboration between Dennis Matthies (1946- ) and Dr. Monica Worline, while both taught/studied at Stanford University. Precision questioning seeks to enable its practitioners with a highly structured, one-question/one-answer discussion format to help them solve complex problems, conduct deep analysis, and make difficult decisions. PQ focuses on clearly expressing gaps in thinking by coupling a taxonomy of analytical questions with a structured call-and-response model to enable PQ practitioners to uncover weaknesses in thinking and to raise the intellectual level of a conversation. Those who use precision questioning (also called "PQers") describe PQ conversations as those analytical opportunities motivated by an attempt to get to precise answers, or to identify where no answer is available. However, when "drilling" into a topic, practitioners endeavor to avoid the use of personalization (blame or shame). Document 5::: Decision variant A decision problem is a yes-or-no question on an infinite set of inputs. It is traditional to define the decision problem as the set of possible inputs together with the set of inputs for which the answer is yes. These inputs can be natural numbers, but can also be values of some other kind, like binary strings or strings over some other alphabet. The subset of strings for which the problem returns "yes" is a formal language, and often decision problems are defined as formal languages. Using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. Therefore, the algorithm of a decision problem is to compute the characteristic function of a subset of the natural numbers.
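A minimal dictionary-attack sketch in Python (ours, purely illustrative) makes the correct option tangible: the attack succeeds only if the target password is inside the dictionary, so its success probability grows with the dictionary's size and coverage:

import hashlib

def crack(target_digest, dictionary):
    # Hash each candidate and compare against the stolen digest.
    for word in dictionary:
        if hashlib.sha256(word.encode()).hexdigest() == target_digest:
            return word
    return None

dictionary = ["123456", "password", "qwerty", "letmein"]
stolen = hashlib.sha256(b"letmein").hexdigest()
print(crack(stolen, dictionary))   # found only because "letmein" is listed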
epfl-collab
Tick the \emph{false} assertion. Given a ring $R$, $R^\star$ is\ldots
['the set of units.', '$R-\\{0\\}$.', 'a group.', 'the set of invertible elements in $R$.']
B
null
Document 1::: Boolean rings In mathematics, a Boolean ring R is a ring for which x^2 = x for all x in R, that is, a ring that consists only of idempotent elements. An example is the ring of integers modulo 2. Every Boolean ring gives rise to a Boolean algebra, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨, which would constitute a semiring). Conversely, every Boolean algebra gives rise to a Boolean ring. Boolean rings are named after the founder of Boolean algebra, George Boole. Document 2::: Unit ring A ring is a set R equipped with two binary operations + (addition) and ⋅ (multiplication) satisfying the following three sets of axioms, called the ring axioms R is an abelian group under addition, meaning that: (a + b) + c = a + (b + c) for all a, b, c in R (that is, + is associative). a + b = b + a for all a, b in R (that is, + is commutative). There is an element 0 in R such that a + 0 = a for all a in R (that is, 0 is the additive identity). For each a in R there exists −a in R such that a + (−a) = 0 (that is, −a is the additive inverse of a). Document 3::: Rng (algebra) In mathematics, and more specifically in abstract algebra, a rng (or non-unital ring or pseudo-ring) is an algebraic structure satisfying the same properties as a ring, but without assuming the existence of a multiplicative identity. The term rng is meant to suggest that it is a ring without i, that is, without the requirement for an identity element. There is no consensus in the community as to whether the existence of a multiplicative identity must be one of the ring axioms (see Ring (mathematics) § History). The term rng was coined to alleviate this ambiguity when people want to refer explicitly to a ring without the axiom of multiplicative identity. A number of algebras of functions considered in analysis are not unital, for instance the algebra of functions decreasing to zero at infinity, especially those with compact support on some (non-compact) space. Document 4::: Rng homomorphism From the standpoint of ring theory, isomorphic rings cannot be distinguished. If R and S are rngs, then the corresponding notion is that of a rng homomorphism, defined as above except without the third condition f(1_R) = 1_S. A rng homomorphism between (unital) rings need not be a ring homomorphism. Document 5::: Ring extension A subring of a ring (R, +, ∗, 0, 1) is a subset S of R that preserves the structure of the ring, i.e. a ring (S, +, ∗, 0, 1) with S ⊆ R. Equivalently, it is both a subgroup of (R, +, 0) and a submonoid of (R, ∗, 1).
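The false option above can be checked by computing R^* for R = Z_n: the units are exactly the residues coprime to n, which is usually a strict subset of R \ {0}. An illustrative Python sketch (ours):

from math import gcd

def units(n):
    # Invertible elements of Z_n under multiplication.
    return [a for a in range(1, n) if gcd(a, n) == 1]

print(units(12))   # [1, 5, 7, 11] -- far from all of {1, ..., 11}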
epfl-collab
Select the \emph{incorrect} statement. Bluetooth is
['a short-range wireless technology.', 'able to transmit 1Mbit/sec in 10m distance.', 'a standard for RFID tags.', 'designed both for data and voice transmission.']
C
null
Document 1::: Bluetooth Basic Rate/Enhanced Data Rate Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz. Document 2::: Bluetooth Basic Rate/Enhanced Data Rate It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. Document 3::: Transposition error A transcription error is a specific type of data entry error that is commonly made by human operators or by optical character recognition (OCR) programs. Human transcription errors are commonly the result of typographical mistakes; putting one's fingers in the wrong place while touch typing is the easiest way to make this error. Electronic transcription errors occur when the scan of some printed matter is compromised or in an unusual font – for example, if the paper is crumpled, or the ink is smudged, the OCR may make transcription errors when reading. Document 4::: IBeacon iBeacon is a protocol developed by Apple and introduced at the Apple Worldwide Developers Conference in 2013. Various vendors have since made iBeacon-compatible hardware transmitters – typically called beacons – a class of Bluetooth Low Energy (BLE) devices that broadcast their identifier to nearby portable electronic devices. The technology enables smartphones, tablets and other devices to perform actions when in proximity to an iBeacon.iBeacon is based on Bluetooth low energy proximity sensing by transmitting a universally unique identifier picked up by a compatible app or operating system. The identifier and several bytes sent with it can be used to determine the device's physical location, track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification. Document 5::: Bluetooth Basic Rate/Enhanced Data Rate The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market it as a Bluetooth device. A network of patents apply to the technology, which are licensed to individual qualifying devices. As of 2021, 4.7 billion Bluetooth integrated circuit chips are shipped annually.
epfl-collab
Which cipher is AES?
['RC5', 'SAFER', 'BLOWFISH', 'RIJNDAEL']
D
null
Document 1::: Cipher In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography. Document 2::: UES (cipher) In cryptography, UES (Universal Encryption Standard) is a block cipher designed in 1999 by Helena Handschuh and Serge Vaudenay. They proposed it as a transitional step, to prepare for the completion of the AES process. UES was designed with the same interface as AES: a block size of 128 bits and key size of 128, 192, or 256 bits. It consists of two parallel Triple DES encryptions on the halves of the block, with key whitening and key-dependent swapping of bits between the halves. The key schedule is taken from DEAL. Document 3::: SSS (cipher) In cryptography, SSS is a stream cypher algorithm developed by Gregory Rose, Philip Hawkes, Michael Paddon, and Miriam Wiggers de Vries. It includes a message authentication code feature. It has been submitted to the eSTREAM Project of the eCRYPT network. It was not selected for focus nor for consideration during Phase 2; it has been 'archived'. Document 4::: Block cipher In cryptography, a block cipher is a deterministic algorithm that operates on fixed-length groups of bits, called blocks. Block ciphers are the elementary building blocks of many cryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated via encryption. A block cipher uses blocks as an unvarying transformation. Document 5::: LEX (cipher) LEX is a stream cipher based on the round transformation of AES. LEX provides the same key agility and short message block performance as AES while handling longer messages faster than AES. In addition, it has the same hardware and software flexibility as AES, and hardware implementations of LEX can share resources with AES implementations.
epfl-collab
Which of the following algorithms is a stream cipher?
['FOX', 'IDEA', 'RC4', 'AES']
C
null
Document 1::: Cipher Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext. Most modern ciphers can be categorized in several ways By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers). By whether the same key is used for both encryption and decryption (symmetric key algorithms), or if a different key is used for each (asymmetric key algorithms). Document 2::: SSS (cipher) In cryptography, SSS is a stream cypher algorithm developed by Gregory Rose, Philip Hawkes, Michael Paddon, and Miriam Wiggers de Vries. It includes a message authentication code feature. It has been submitted to the eSTREAM Project of the eCRYPT network. It was not selected for focus nor for consideration during Phase 2; it has been 'archived'. Document 3::: Cryptosystem In cryptography, a cryptosystem is a suite of cryptographic algorithms needed to implement a particular security service, such as confidentiality (encryption). Typically, a cryptosystem consists of three algorithms: one for key generation, one for encryption, and one for decryption. The term cipher (sometimes cypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the term cryptosystem is most often used when the key generation algorithm is important. For this reason, the term cryptosystem is commonly used to refer to public key techniques; however both "cipher" and "cryptosystem" are used for symmetric key techniques. Document 4::: ABC (cipher) In cryptography, ABC is a stream cypher algorithm developed by Vladimir Anashin, Andrey Bogdanov, Ilya Kizhvatov, and Sandeep Kumar. It has been submitted to the eSTREAM Project of the eCRYPT network. Document 5::: QUAD (cipher) In cryptography, the QUAD cipher is a stream cipher which was designed with provable security arguments in mind.
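Since RC4 is the stream cipher among the options, a compact pure-Python RC4 sketch (key scheduling followed by keystream generation) shows the defining behaviour: a key-dependent keystream produced byte by byte and XORed into the data. This is for illustration only; RC4 is broken and must not be deployed:

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
print(ct.hex())          # bbf316e8d940af0ad3, the classic published test vector
print(rc4(b"Key", ct))   # XORing the same keystream again decrypts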
epfl-collab
Consider a public key cryptosystem. The channel used to transmit the public key has to be\dots
['\\dots authenticated.', '\\dots authenticated and confidential.', '\\dots encrypted.', '\\dots confidential.']
A
null
Document 1::: Threshold cryptosystem A threshold cryptosystem, the basis for the field of threshold cryptography, is a cryptosystem that protects information by encrypting it and distributing it among a cluster of fault-tolerant computers. The message is encrypted using a public key, and the corresponding private key is shared among the participating parties. With a threshold cryptosystem, in order to decrypt an encrypted message or to sign a message, several parties (more than some threshold number) must cooperate in the decryption or signature protocol. Document 2::: Key distribution In symmetric key cryptography, both parties must possess a secret key which they must exchange prior to using any encryption. Distribution of secret keys has been problematic until recently, because it involved face-to-face meeting, use of a trusted courier, or sending the key through an existing encryption channel. The first two are often impractical and always unsafe, while the third depends on the security of a previous key exchange. In public key cryptography, the key distribution of public keys is done through public key servers. Document 3::: Asymmetric Algorithms Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security.In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message.For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Document 4::: Key distribution When a person creates a key-pair, they keep one key private and the other, known as the public-key, is uploaded to a server where it can be accessed by anyone to send the user a private, encrypted, message. Secure Sockets Layer (SSL) uses Diffie–Hellman key exchange if the client does not have a public-private key pair and a published certificate in the public key infrastructure, and Public Key Cryptography if the user does have both the keys and the credential. Key distribution is an important issue in wireless sensor network (WSN) design. Document 5::: Binary symmetric channel A binary symmetric channel (or BSCp) is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver will receive a bit. The bit will be "flipped" with a "crossover probability" of p, and otherwise is received correctly. This model can be applied to varied communication channels such as telephone lines or disk drive storage.
epfl-collab
KEM/DEM refers to\dots
['a hash function.', 'an encryption scheme.', 'a digital signature scheme.', 'a commitment scheme.']
B
null
Document 1::: Dot-decimal notation Dot-decimal notation is a presentation format for numerical data. It consists of a string of decimal numbers, using the full stop (dot) as a separation character.A common use of dot-decimal notation is in information technology where it is a method of writing numbers in octet-grouped base-10 (decimal) numbers. In computer networking, Internet Protocol Version 4 (IPv4) addresses are commonly written using the quad-dotted notation of four decimal integers, ranging from 0 to 255 each. Document 2::: Digital elevation models A digital elevation model (DEM) or digital surface model (DSM) is a 3D computer graphics representation of elevation data to represent terrain or overlaying objects, commonly of a planet, moon, or asteroid. A "global DEM" refers to a discrete global grid. DEMs are used often in geographic information systems (GIS), and are the most common basis for digitally produced relief maps. A digital terrain model (DTM) represents specifically the ground surface while DEM and DSM may represent tree top canopy or building roofs. While a DSM may be useful for landscape modeling, city modeling and visualization applications, a DTM is often required for flood or drainage modeling, land-use studies, geological applications, and other applications, and in planetary science. Document 3::: Dot planimeter A dot planimeter is a device used in planimetrics for estimating the area of a shape, consisting of a transparent sheet containing a square grid of dots. To estimate the area of a shape, the sheet is overlaid on the shape and the dots within the shape are counted. The estimate of area is the number of dots counted multiplied by the area of a single grid square. Document 4::: Dot plot (statistics) A dot chart or dot plot is a statistical chart consisting of data points plotted on a fairly simple scale, typically using filled in circles. There are two common, yet very different, versions of the dot chart. The first has been used in hand-drawn (pre-computer era) graphs to depict distributions going back to 1884. The other version is described by William S. Cleveland as an alternative to the bar chart, in which dots are used to depict the quantitative values (e.g. counts) associated with categorical variables. Document 5::: Discrete Element Method A discrete element method (DEM), also called a distinct element method, is any of a family of numerical methods for computing the motion and effect of a large number of small particles. Though DEM is very closely related to molecular dynamics, the method is generally distinguished by its inclusion of rotational degrees-of-freedom as well as stateful contact and often complicated geometries (including polyhedra). With advances in computing power and numerical algorithms for nearest neighbor sorting, it has become possible to numerically simulate millions of particles on a single processor.
epfl-collab
Tick the \textbf{false} statement.
['The cardinality of $E_{a,b}(\\mathsf{GF}(q))$ is bounded by $q+1+2\\sqrt{q}$.', 'In $(\\mathsf{GF}(2^k))$, we have $\\mathsf{Tr}(a+b)=\\mathsf{Tr}(a)+\\mathsf{Tr}(b)$.', 'Two Elliptic curves cannot have the same $j$-invariant.', '$E_{a,b}$ is non-singular if $4a^3+27b^2 \\neq 0$ over a finite field of characteristic $p>3$.']
C
null
Document 1::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol ⊥. Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), ⊥, is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 2::: Boolean expression In computer science, a Boolean expression is an expression used in programming languages that produces a Boolean value when evaluated. A Boolean value is either true or false. A Boolean expression may be composed of a combination of the Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions. Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits. Document 3::: Boolean flag A Boolean flag, truth bit or truth flag in computer science is a Boolean value represented as one or more bits, which encodes a state variable with two possible values. Document 4::: Boolean domain The initial object in the category of bounded lattices is a Boolean domain. In computer science, a Boolean variable is a variable that takes values in some Boolean domain. Some programming languages feature reserved words or symbols for the elements of the Boolean domain, for example false and true. However, many programming languages do not have a Boolean datatype in the strict sense. In C or BASIC, for example, falsity is represented by the number 0 and truth is represented by the number 1 or −1, and all variables that can take these values can also take any other numerical values. Document 5::: Boolean-valued function A Boolean-valued function (sometimes called a predicate or a proposition) is a function of the type f: X → B, where X is an arbitrary set and where B is a Boolean domain, i.e. a generic two-element set, (for example B = {0, 1}), whose elements are interpreted as logical values, for example, 0 = false and 1 = true, i.e., a single bit of information. In the formal sciences, mathematics, mathematical logic, statistics, and their applied disciplines, a Boolean-valued function may also be referred to as a characteristic function, indicator function, predicate, or proposition. In all of these uses, it is understood that the various terms refer to a mathematical object and not the corresponding semiotic sign or syntactic expression. In formal semantic theories of truth, a truth predicate is a predicate on the sentences of a formal language, interpreted for logic, that formalizes the intuitive concept that is normally expressed by saying that a sentence is true. A truth predicate may have additional domains beyond the formal language domain, if that is what is required to determine a final truth value.
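The curve facts in the question can be verified by brute force for a small prime. An illustrative Python sketch (ours): check the non-singularity condition 4a^3 + 27b^2 != 0 and count the points of E_{a,b}: y^2 = x^3 + ax + b over GF(p), confirming the Hasse-style cardinality bound quoted in the first option:

def curve_order(a, b, p):
    # Brute-force point count: affine solutions plus the point at infinity.
    assert (4 * a**3 + 27 * b**2) % p != 0, "singular curve"
    count = 1
    for x in range(p):
        rhs = (x**3 + a * x + b) % p
        count += sum(1 for y in range(p) if (y * y) % p == rhs)
    return count

a, b, p = 2, 3, 97
n = curve_order(a, b, p)
print("order:", n)
assert (n - p - 1) ** 2 <= 4 * p   # Hasse: |n - (p + 1)| <= 2*sqrt(p)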
epfl-collab
Select \emph{incorrect} statement. The brute force technique against a cipher with key $256$ bits is
['impossible since the number of possible keys is too high $2^{256} \\approx 10^{77}$.', 'feasible using all clusters at EPFL.', 'impossible in the future even if we consider Moore's law.', 'impossible even if we can compute without burning any energy.']
B
null
Document 1::: Brute force attack In cryptography, a brute-force attack consists of an attacker submitting many passwords or passphrases with the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess the key which is typically created from the password using a key derivation function. This is known as an exhaustive key search. Document 2::: Brute force attack A brute-force attack is a cryptanalytic attack that can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in an information-theoretically secure manner). Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier. When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Document 3::: Brute force attack Longer passwords, passphrases and keys have more possible values, making them exponentially more difficult to crack than shorter ones. Brute-force attacks can be made less effective by obfuscating the data to be encoded making it more difficult for an attacker to recognize when the code has been cracked or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it. Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one. The word 'hammering' is sometimes used to describe a brute-force attack, with 'anti-hammering' for countermeasures. Document 4::: M6 (cipher) Mod 257, information about the secret key itself is revealed. One known plaintext reduces the complexity of a brute force attack to about 2^35 trial encryptions; "a few dozen" known plaintexts lowers this number to about 2^31. Due to its simple key schedule, M6 is also vulnerable to a slide attack, which requires more known plaintext but less computation. Document 5::: Benaloh cryptosystem To decrypt a ciphertext c ∈ Z_n^*: compute a = c^(φ/r) mod n, then output m = log_x(a), i.e., find m such that x^m ≡ a mod n. To understand decryption, first notice that for any m ∈ Z_r and u ∈ Z_n^* we have: a = c^(φ/r) ≡ (y^m u^r)^(φ/r) ≡ (y^m)^(φ/r) (u^r)^(φ/r) ≡ (y^(φ/r))^m u^φ ≡ x^m u^0 ≡ x^m mod n. To recover m from a, we take the discrete log of a base x. If r is small, we can recover m by an exhaustive search, i.e. checking if x^i ≡ a mod n for all 0 … (r − 1). For larger values of r, the Baby-step giant-step algorithm can be used to recover m in O(√r) time and space.
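Back-of-the-envelope arithmetic (ours, with a deliberately optimistic hypothetical rate) shows why 2^256 keys are out of reach regardless of clusters or Moore's law:

keys = 2 ** 256
rate = 10 ** 18                    # hypothetical key trials per second
seconds = keys // rate
age_universe = 4.3e17              # rough age of the universe in seconds
print(f"{keys:.3e} keys")          # about 1.158e+77, matching the ~10^77 in the question
print(f"{seconds / age_universe:.3e} universe ages at 10^18 trials/s")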
epfl-collab
Select the \emph{weakest} algorithm.
['A5/1', 'A5/4', 'A5/3', 'A5/2']
D
null
Document 1::: Selection problem In computer science, a selection algorithm is an algorithm for finding the kth smallest value in a collection of ordered values, such as numbers. The value that it finds is called the kth order statistic. Selection includes as special cases the problems of finding the minimum, median, and maximum element in the collection. Document 2::: Algorithm selection Algorithm selection (sometimes also called per-instance algorithm selection or offline algorithm selection) is a meta-algorithmic technique to choose an algorithm from a portfolio on an instance-by-instance basis. It is motivated by the observation that on many practical problems, different algorithms have different performance characteristics. That is, while one algorithm performs well in some scenarios, it performs poorly in others and vice versa for another algorithm. If we can identify when to use which algorithm, we can optimize for each scenario and improve overall performance. This is what algorithm selection aims to do. The only prerequisite for applying algorithm selection techniques is that there exists (or that there can be constructed) a set of complementary algorithms. Document 3::: Floyd–Rivest algorithm In computer science, the Floyd-Rivest algorithm is a selection algorithm developed by Robert W. Floyd and Ronald L. Rivest that has an optimal expected number of comparisons within lower-order terms. It is functionally equivalent to quickselect, but runs faster in practice on average. It has an expected running time of O(n) and an expected number of comparisons of n + min(k, n − k) + O(n^(1/2) log^(1/2) n). The algorithm was originally presented in a Stanford University technical report containing two papers, where it was referred to as SELECT and paired with PICK, or median of medians. It was subsequently published in Communications of the ACM, Volume 18: Issue 3. Document 4::: Minimum degree algorithm (P^T A P)(P^T x) = P^T b. The problem of finding the best ordering is an NP-complete problem and is thus intractable, so heuristic methods are used instead.
epfl-collab
Tick the \textit{incorrect} assertion.
['HMAC is a message authentication code based on a hash function.', 'GCM is a block cipher mode of operation that provides both confidentiality and authenticity for messages.', 'Plain CBC-MAC resists forgery attacks.', 'A message authentication scheme that resists a chosen message forgery attack will also resist a known message forgery attack.']
C
null
Document 1::: False position In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations. Document 2::: Test assertion In computer software testing, a test assertion is an expression which encapsulates some testable logic specified about a target under test. The expression is formally presented as an assertion, along with some form of identifier, to help testers and engineers ensure that tests of the target relate properly and clearly to the corresponding specified statements about the target. Usually the logic for each test assertion is limited to one single aspect specified. A test assertion may include prerequisites which must be true for the test assertion to be valid. Document 4::: Assertion (software development) In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception. Document 5::: Charset detection Character encoding detection, charset detection, or code page detection is the process of heuristically guessing the character encoding of a series of bytes that represent text. The technique is recognised to be unreliable and is only used when specific metadata, such as a HTTP Content-Type: header is either not available, or is assumed to be untrustworthy. This algorithm usually involves statistical analysis of byte patterns, like frequency distribution of trigraphs of various languages encoded in each code page that will be detected; such statistical analysis can also be used to perform language detection.
epfl-collab
Moore's law
['has no relevance for cryptography since it only considers speed of computation', 'says that CPU speed doubles every 18 months', 'states that anything that can go wrong will', 'implies the key size is doubled every 18 months to preserve confidentiality']
B
null
Document 1::: Computational power Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production. The observation is named after Gordon Moore, the co-founder of Fairchild Semiconductor and Intel (and former CEO of the latter), who in 1965 posited a doubling every year in the number of components per integrated circuit, and projected this rate of growth would continue for at least another decade. Document 2::: Computational power Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022 Nvidia CEO Jensen Huang considered Moore's law dead, while Intel CEO Pat Gelsinger was of the opposite view. Document 3::: Moore's second law Rock's law or Moore's second law, named for Arthur Rock or Gordon Moore, says that the cost of a semiconductor chip fabrication plant doubles every four years. As of 2015, the price had already reached about 14 billion US dollars. Rock's law can be seen as the economic flip side to Moore's (first) law – that the number of transistors in a dense integrated circuit doubles every two years. The latter is a direct consequence of the ongoing growth of the capital-intensive semiconductor industry – innovative and popular products mean more profits, meaning more capital available to invest in ever higher levels of large-scale integration, which in turn leads to the creation of even more innovative products. The semiconductor industry has always been extremely capital-intensive, with ever-dropping manufacturing unit costs. Thus, the ultimate limits to growth of the industry will constrain the maximum amount of capital that can be invested in new products; at some point, Rock's Law will collide with Moore's Law. It has been suggested that fabrication plant costs have not increased as quickly as predicted by Rock's law – indeed plateauing in the late 1990s – and also that the fabrication plant cost per transistor (which has shown a pronounced downward trend) may be more relevant as a constraint on Moore's Law. Document 4::: Computational power Advancements in digital electronics, such as the reduction in quality-adjusted microprocessor prices, the increase in memory capacity (RAM and flash), the improvement of sensors, and even the number and size of pixels in digital cameras, are strongly linked to Moore's law. These ongoing changes in digital electronics have been a driving force of technological and social change, productivity, and economic growth. Document 5::: Computational power In 1975, looking forward to the next decade, he revised the forecast to doubling every two years, a compound annual growth rate (CAGR) of 41%. While Moore did not use empirical evidence in forecasting that the historical trend would continue, his prediction held since 1975 and has since become known as a "law". Moore's prediction has been used in the semiconductor industry to guide long-term planning and to set targets for research and development, thus functioning to some extent as a self-fulfilling prophecy.
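The quoted growth figures are easy to check: a doubling every two years corresponds to the 41% CAGR mentioned in Document 5, since $2^{1/2} \approx 1.41$. A quick sketch (the starting count and the projection horizon below are illustrative assumptions):

    # A 2-year doubling implies a compound annual growth rate of 2^(1/2) - 1.
    cagr = 2 ** 0.5 - 1
    print(f"CAGR for a two-year doubling: {cagr:.0%}")   # ~41%

    # Projected count after a number of years, given a doubling period in years:
    def moore_projection(n0, years, doubling_years=2.0):
        return n0 * 2 ** (years / doubling_years)

    # Hypothetical 20-year projection starting from 2300 transistors.
    print(moore_projection(2300, 20))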
epfl-collab
Select the \emph{incorrect} statement. The Bluetooth project aims for
['low security.', 'low power.', 'low complexity.', 'low cost.']
A
null
Document 1::: Bluetooth Basic Rate/Enhanced Data Rate It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. Document 2::: Bluetooth Basic Rate/Enhanced Data Rate Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz. Document 3::: IBeacon iBeacon is a protocol developed by Apple and introduced at the Apple Worldwide Developers Conference in 2013. Various vendors have since made iBeacon-compatible hardware transmitters – typically called beacons – a class of Bluetooth Low Energy (BLE) devices that broadcast their identifier to nearby portable electronic devices. The technology enables smartphones, tablets and other devices to perform actions when in proximity to an iBeacon. iBeacon is based on Bluetooth low energy proximity sensing by transmitting a universally unique identifier picked up by a compatible app or operating system. The identifier and several bytes sent with it can be used to determine the device's physical location, track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification. Document 4::: Bluetooth Basic Rate/Enhanced Data Rate The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market it as a Bluetooth device. A network of patents apply to the technology, which are licensed to individual qualifying devices. As of 2021, 4.7 billion Bluetooth integrated circuit chips are shipped annually. Document 5::: BadBIOS BadBIOS is alleged malware described by network security researcher Dragos Ruiu in October 2013 with the ability to communicate between instances of itself across air gaps using ultrasonic communication between a computer's speakers and microphone. To date, there have been no proven occurrences of this malware. Ruiu says that the malware is able to infect the BIOS of computers running Windows, Mac OS X, BSD and Linux as well as spread infection over USB flash drives. Rob Graham of Errata Security produced a detailed analysis of each element of the descriptions of BadBIOS's capabilities, describing the software as "plausible", whereas Paul Ducklin on the Sophos Naked Security blog suggested "It's possible, of course, that this is an elaborate hoax". After Ruiu posted data dumps which supposedly demonstrated the existence of the virus, "all signs of maliciousness were found to be normal and expected data". In December 2013 computer scientists Michael Hanspach and Michael Goetz released a paper to the Journal of Communication demonstrating the possibility of an acoustic mesh networking at a slow 20 bits per second using a set of speakers and microphones for ultrasonic communication in a fashion similar to BadBIOS's described abilities.
epfl-collab
Tick the \emph{false} assertion. The ambiguity issue in the decryption algorithm of the Rabin cryptosystem can be solved by\dots
['encrypting the message twice.', 'ensuring that the other possible plaintexts make no sense.', 'appending some integrity checks to the message before encryption.', 'encrypting the message appended to itself.']
A
null
Document 1::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol $\bot$. Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), $\bot$, is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 2::: Rabin fingerprint The Rabin fingerprinting scheme is a method for implementing fingerprints using polynomials over a finite field. It was proposed by Michael O. Rabin. Document 3::: ISAAC (cipher) ISAAC (indirection, shift, accumulate, add, and count) is a cryptographically secure pseudorandom number generator and a stream cipher designed by Robert J. Jenkins Jr. in 1993. The reference implementation source code was dedicated to the public domain. "I developed (...) tests to break a generator, and I developed the generator to pass the tests. The generator is ISAAC." Document 4::: Rabin–Karp string search algorithm In computer science, the Rabin–Karp algorithm or Karp–Rabin algorithm is a string-searching algorithm created by Richard M. Karp and Michael O. Rabin (1987) that uses hashing to find an exact match of a pattern string in a text. It uses a rolling hash to quickly filter out positions of the text that cannot match the pattern, and then checks for a match at the remaining positions. Generalizations of the same idea can be used to find more than one match of a single pattern, or to find matches for more than one pattern. To find a single match of a single pattern, the expected time of the algorithm is linear in the combined length of the pattern and text, although its worst-case time complexity is the product of the two lengths. Document 5::: Boolean flag A Boolean flag, truth bit or truth flag in computer science is a Boolean value represented as one or more bits, which encodes a state variable with two possible values.
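A compact sketch of the Rabin–Karp search described above, using a simple polynomial rolling hash (the base and modulus here are arbitrary illustrative choices):

    def rabin_karp(text, pattern, base=256, mod=1_000_003):
        """Return the first index of pattern in text, or -1 if absent."""
        n, m = len(text), len(pattern)
        if m > n:
            return -1
        high = pow(base, m - 1, mod)          # weight of the outgoing character
        p_hash = t_hash = 0
        for i in range(m):                    # hashes of pattern and first window
            p_hash = (p_hash * base + ord(pattern[i])) % mod
            t_hash = (t_hash * base + ord(text[i])) % mod
        for i in range(n - m + 1):
            # Verify on hash match to rule out spurious collisions.
            if p_hash == t_hash and text[i:i + m] == pattern:
                return i
            if i < n - m:                     # roll the window one step right
                t_hash = ((t_hash - ord(text[i]) * high) * base
                          + ord(text[i + m])) % mod
        return -1

    assert rabin_karp("abracadabra", "cad") == 4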
epfl-collab
What is the order of $2^{124}$ in $(\mathbb{Z}_{2^{128}},+)$?
['$\\varphi(2^{128})$.', '8.', '16.', '124.']
C
null
Document 1::: Multiplicative order In number theory, given a positive integer n and an integer a coprime to n, the multiplicative order of a modulo n is the smallest positive integer k such that $a^k \equiv 1 \pmod{n}$. In other words, the multiplicative order of a modulo n is the order of a in the multiplicative group of the units in the ring of the integers modulo n. The order of a modulo n is sometimes written as $\operatorname{ord}_n(a)$. Document 2::: Divisible by 4 For an integer n, the 2-order of n (also called valuation) is the largest natural number $\nu$ such that $2^\nu$ divides n. This definition applies to positive and negative numbers n, although some authors restrict it to positive n; and one may define the 2-order of 0 to be infinity (see also parity of zero). The 2-order of n is written $\nu_2(n)$ or $\operatorname{ord}_2(n)$. It is not to be confused with the multiplicative order modulo 2. Document 3::: Multiplicative group of integers modulo n In modular arithmetic, the integers coprime (relatively prime) to n from the set $\{0, 1, \dots, n-1\}$ of n non-negative integers form a group under multiplication modulo n, called the multiplicative group of integers modulo n. Equivalently, the elements of this group can be thought of as the congruence classes, also known as residues modulo n, that are coprime to n. Hence another name is the group of primitive residue classes modulo n. In the theory of rings, a branch of abstract algebra, it is described as the group of units of the ring of integers modulo n. Here units refers to elements with a multiplicative inverse, which, in this ring, are exactly those coprime to n. This quotient group, usually denoted $(\mathbb{Z}/n\mathbb{Z})^{\times}$, is fundamental in number theory. It is used in cryptography, integer factorization, and primality testing. It is an abelian, finite group whose order is given by Euler's totient function: $|(\mathbb{Z}/n\mathbb{Z})^{\times}| = \varphi(n)$. For prime n the group is cyclic, and in general the structure is easy to describe, but no simple general formula for finding generators is known. Document 4::: Cyclic number (group theory) Let $n = p_1 p_2 \cdots p_k$ where the $p_i$ are distinct primes, then $\varphi(n) = (p_1 - 1)(p_2 - 1)\cdots(p_k - 1)$. If no $p_i$ divides any $(p_j - 1)$, then $n$ and $\varphi(n)$ have no common (prime) divisor, and $n$ is cyclic. The first cyclic numbers are 1, 2, 3, 5, 7, 11, 13, 15, 17, 19, 23, 29, 31, 33, 35, 37, 41, 43, 47, 51, 53, 59, 61, 65, 67, 69, 71, 73, 77, 79, 83, 85, 87, 89, 91, 95, 97, 101, 103, 107, 109, 113, 115, 119, 123, 127, 131, 133, 137, 139, 141, 143, 145, 149, ... (sequence A003277 in the OEIS). Document 5::: Binary tetrahedral group In mathematics, the binary tetrahedral group, denoted 2T or ⟨2,3,3⟩, is a certain nonabelian group of order 24. It is an extension of the tetrahedral group T or (2,3,3) of order 12 by a cyclic group of order 2, and is the preimage of the tetrahedral group under the 2:1 covering homomorphism Spin(3) → SO(3) of the special orthogonal group by the spin group. It follows that the binary tetrahedral group is a discrete subgroup of Spin(3) of order 24. The complex reflection group named 3(24)3 by G.C.
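For the additive group in the question above, no search is needed: the order of $a$ in $(\mathbb{Z}_n, +)$ is $n/\gcd(a, n)$, so the order of $2^{124}$ in $(\mathbb{Z}_{2^{128}}, +)$ is $2^{128}/2^{124} = 16$. A direct check:

    from math import gcd

    def additive_order(a, n):
        """Order of a in (Z_n, +): smallest k > 0 with k*a == 0 (mod n)."""
        return n // gcd(a, n)

    n = 2 ** 128
    a = 2 ** 124
    print(additive_order(a, n))                   # 16
    assert (16 * a) % n == 0 and (8 * a) % n != 0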
epfl-collab
Which problem in communication is \emph{not} treated by cryptography?
['confidentiality', 'authentication', 'integrity', 'data transmission']
D
null
Document 1::: Non-commutative cryptography Non-commutative cryptography is the area of cryptology where the cryptographic primitives, methods and systems are based on algebraic structures like semigroups, groups and rings which are non-commutative. One of the earliest applications of a non-commutative algebraic structure for cryptographic purposes was the use of braid groups to develop cryptographic protocols. Later several other non-commutative structures like Thompson groups, polycyclic groups, Grigorchuk groups, and matrix groups have been identified as potential candidates for cryptographic applications. Document 2::: Non-commutative cryptography In contrast to non-commutative cryptography, the currently widely used public-key cryptosystems like RSA cryptosystem, Diffie–Hellman key exchange and elliptic curve cryptography are based on number theory and hence depend on commutative algebraic structures. Non-commutative cryptographic protocols have been developed for solving various cryptographic problems like key exchange, encryption-decryption, and authentication. These protocols are very similar to the corresponding protocols in the commutative case. Document 3::: Diffie–Hellman problem The Diffie–Hellman problem (DHP) is a mathematical problem first proposed by Whitfield Diffie and Martin Hellman in the context of cryptography. The motivation for this problem is that many security systems use one-way functions: mathematical operations that are fast to compute, but hard to reverse. For example, they enable encrypting a message, but reversing the encryption is difficult. If solving the DHP were easy, these systems would be easily broken. Document 4::: Asymmetric Algorithms Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security. In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message. For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Document 5::: Cryptographic algorithm Encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor. For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required.
epfl-collab
What are the complexities for the single-target dictionary attacks, when there are $N$ keys?
['Preprocessing: $N$, Memory: $1$, Time: $N$', 'Preprocessing: $0$, Memory: $1$, Time: $\\sqrt{N}$', 'Preprocessing: $1$, Memory: $N$, Time: $N$', 'Preprocessing: $N$, Memory: $N$, Time: 1']
D
null
Document 1::: Correlation attack Third order correlations and higher can be defined in this way. Higher-order correlation attacks can be more powerful than single-order correlation attacks, however, this effect is subject to a "law of limiting returns". The table below shows a measure of the computational cost for various attacks on a keystream generator consisting of eight 8-bit LFSRs combined by a single Boolean function. Understanding the calculation of cost is relatively straightforward: the leftmost term of the sum represents the size of the key space for the correlated generators, and the rightmost term represents the size of the key space for the remaining generators. While higher-order correlations lead to more powerful attacks, they are also more difficult to find, as the space of available Boolean functions to correlate against the generator output increases as the number of arguments to the function does. Document 2::: Known-key distinguishing attack In cryptography, a known-key distinguishing attack is an attack model against symmetric ciphers, whereby an attacker who knows the key can find a structural property in the cipher, where the transformation from plaintext to ciphertext is not random. There is no common formal definition for what such a transformation may be. The chosen-key distinguishing attack is strongly related, where the attacker can choose a key to introduce such transformations. These attacks do not directly compromise the confidentiality of ciphers, because in a classical scenario, the key is unknown to the attacker. Document 3::: Brute force attack A brute-force attack is a cryptanalytic attack that can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in an information-theoretically secure manner). Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier. When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Document 4::: Correlation attack Correlation attacks are a class of cryptographic known-plaintext attacks for breaking stream ciphers whose keystreams are generated by combining the output of several linear-feedback shift registers (LFSRs) using a Boolean function. Correlation attacks exploit a statistical weakness that arises from the specific Boolean function chosen for the keystream. While some Boolean functions are vulnerable to correlation attacks, stream ciphers generated using such functions are not inherently insecure. Document 5::: Collision attack In cryptography, a collision attack on a cryptographic hash tries to find two inputs producing the same hash value, i.e. a hash collision. This is in contrast to a preimage attack where a specific target hash value is specified. There are roughly two types of collision attacks: Classical collision attack: find two different messages $m_1$ and $m_2$ such that $\mathrm{hash}(m_1) = \mathrm{hash}(m_2)$. More generally: Chosen-prefix collision attack: given two different prefixes $p_1$ and $p_2$, find two appendages $m_1$ and $m_2$ such that $\mathrm{hash}(p_1 \| m_1) = \mathrm{hash}(p_2 \| m_2)$, where $\|$ denotes the concatenation operation.
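The trade-off in the question above (preprocessing $N$, memory $N$, online time 1) is the precomputed-table form of the dictionary attack. A sketch with a toy key space and a stand-in one-way function (both illustrative assumptions, not any particular scheme):

    import hashlib

    def f(key: bytes) -> bytes:
        """Stand-in one-way function; a real attack targets the actual scheme."""
        return hashlib.sha256(key).digest()

    N = 2 ** 16                                   # toy key space of N keys
    keys = [i.to_bytes(2, "big") for i in range(N)]

    # Preprocessing: N evaluations of f; memory: N table entries.
    table = {f(k): k for k in keys}

    # Online phase: one lookup recovers the key in O(1) expected time.
    target = f((12345).to_bytes(2, "big"))
    assert table[target] == (12345).to_bytes(2, "big")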
epfl-collab
Tick the \emph{incorrect} assertion. The Diffie-Hellman key agreement protocol \ldots
['is easy to break when working on the group $\\mathbf{Z}_{n}$.', 'requires the hardness of the Discrete Logarithm problem.', 'uses ElGamal encryption in order to establish the key.', 'allows two participants to set up a key so that they can communicate securely.']
C
null
Document 1::: Elliptic-curve Diffie-Hellman Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography. Document 2::: Computational Diffie–Hellman assumption The computational Diffie–Hellman (CDH) assumption is a computational hardness assumption about the Diffie–Hellman problem. The CDH assumption involves the problem of computing the discrete logarithm in cyclic groups. The CDH problem illustrates the attack of an eavesdropper in the Diffie–Hellman key exchange protocol to obtain the exchanged secret key. Document 3::: Diffie–Hellman problem The Diffie–Hellman problem (DHP) is a mathematical problem first proposed by Whitfield Diffie and Martin Hellman in the context of cryptography. The motivation for this problem is that many security systems use one-way functions: mathematical operations that are fast to compute, but hard to reverse. For example, they enable encrypting a message, but reversing the encryption is difficult. If solving the DHP were easy, these systems would be easily broken. Document 4::: Decision Linear assumption The Decision Linear (DLIN) assumption is a computational hardness assumption used in elliptic curve cryptography. In particular, the DLIN assumption is useful in settings where the decisional Diffie–Hellman assumption does not hold (as is often the case in pairing-based cryptography). The Decision Linear assumption was introduced by Boneh, Boyen, and Shacham. Informally, the DLIN assumption states that given $(u, v, h, u^x, v^y)$, with $u, v, h$ random group elements and $x, y$ random exponents, it is hard to distinguish $h^{x+y}$ from an independent random group element $\eta$. Document 5::: Computational Diffie–Hellman assumption Consider a cyclic group G of order q. The CDH assumption states that, given $(g, g^a, g^b)$ for a randomly chosen generator $g$ and random $a, b \in \{0, \ldots, q-1\}$, it is computationally intractable to compute the value $g^{ab}$.
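A toy Diffie–Hellman exchange over $\mathbf{Z}_p^*$, matching the protocol the question refers to. The prime and base below are illustrative assumptions and far too weak for real use:

    import secrets

    # Toy Diffie-Hellman; real deployments use large safe primes or
    # elliptic curves. Illustration only.
    p = 2 ** 127 - 1      # a Mersenne prime, far too structured for real use
    g = 3                 # public base (a generator in a real instantiation)

    a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent

    A = pow(g, a, p)      # Alice sends g^a
    B = pow(g, b, p)      # Bob sends g^b

    # Both sides derive the same shared secret g^(ab) without ElGamal
    # encryption or any other extra machinery.
    assert pow(B, a, p) == pow(A, b, p)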
epfl-collab
Which of these components was not part of the Enigma machine?
['A pseudo-random number generator', 'A plugboard with a wire connection', 'A Rotor', 'A reflector']
A
null
Document 1::: Rotor cipher machine In cryptography, a rotor machine is an electro-mechanical stream cipher device used for encrypting and decrypting messages. Rotor machines were the cryptographic state-of-the-art for much of the 20th century; they were in widespread use in the 1920s–1970s. The most famous example is the German Enigma machine, the output of which was deciphered by the Allies during World War II, producing intelligence code-named Ultra. Document 2::: Crypt (Unix) In Unix computing, crypt or enigma is a utility program used for encryption. Due to the ease of breaking it, it is considered to be obsolete. The program is usually used as a filter, and it has traditionally been implemented using a "rotor machine" algorithm based on the Enigma machine. It is considered to be cryptographically far too weak to provide any security against brute-force attacks by modern, commodity personal computers. Some versions of Unix shipped with an even weaker version of the crypt(1) command in order to comply with contemporaneous laws and regulations that limited the exportation of cryptographic software. Some of these were simply implementations of the Caesar cipher (effectively no more secure than ROT13, which is implemented as a Caesar cipher with a well-known key). Document 3::: Cryptanalytic computer A cryptanalytic computer is a computer designed to be used for cryptanalysis, which nowadays involves massive statistical analysis and multiple trial decryptions that since before World War II are possible only with automated equipment. Polish cryptanalysts designed and built automated aids in their work on Enigma traffic. Arguably, the first modern computer (digital, electronic, and somewhat programmable) was built for cryptanalytic work at Bletchley Park (the Colossus) during the war. More modern computers were important after World War II, and some machines (like the Cray-1) are reported to have had machine instructions hardwired in at the request of NSA. Document 4::: Unit record equipment Initially all machines were manual or electromechanical. The first use of an electronic component was in 1937 when a photocell was used in a Social Security bill-feed machine. Document 5::: CS-Cipher In cryptography, CS-Cipher (for Chiffrement Symétrique) is a block cipher invented by Jacques Stern and Serge Vaudenay in 1998. It was submitted to the NESSIE project, but was not selected. The algorithm uses a key length between 0 and 128 bits (length must be a multiple of 8 bits). By default, the cipher uses 128 bits. It operates on blocks of 64 bits using an 8-round Feistel network and is optimized for 8-bit processors. The round function is based on the fast Fourier transform and uses the binary expansion of e as a source of "nothing up my sleeve numbers".
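The Caesar rotation mentioned in the crypt/enigma document is a one-line transformation; a minimal sketch (ROT13 is the special case key = 13):

    def caesar(text, key):
        """Rotate each letter by a fixed key; non-letters pass through."""
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + key) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    assert caesar("Enigma", 13) == "Ravtzn"                      # ROT13
    assert caesar(caesar("attack at dawn", 3), -3) == "attack at dawn"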
epfl-collab
Consider password-based access control. Tick the \textit{incorrect} assertion.
['Increasing the delay between authentication attempts can protect from online attacks.', 'Double hashing the password can help avoid the problems related to low-entropy passwords.', 'Blocking the access after 10 unsuccessful authentication attempts can protect from online attacks.', 'Salt can be used to thwart multi-target attacks.']
B
null
Document 1::: Logical access control In computers, logical access controls are tools and protocols used for identification, authentication, authorization, and accountability in computer information systems. Logical access is often needed for remote access of hardware and is often contrasted with the term "physical access", which refers to interactions (such as a lock and key) with hardware in the physical environment, where equipment is stored and used. Logical access controls enforce access control measures for systems, programs, processes, and information. The controls can be embedded within operating systems, applications, add-on security packages, or database and telecommunication management systems. Document 2::: Logical access control On swiping the card into a card reader and entering the correct PIN code. Logical controls, also called logical access controls and technical controls, protect data and the systems, networks, and environments that protect them. In order to authenticate, authorize, or maintain accountability a variety of methodologies are used such as password protocols, devices coupled with protocols and software, encryption, firewalls, or other systems that can detect intruders and maintain security, reduce vulnerabilities and protect the data and systems from threats. Document 3::: Logical access control Businesses, organizations and other entities use a wide spectrum of logical access controls to protect hardware from unauthorized remote access. These can include sophisticated password programs, advanced biometric security features, or any other setups that effectively identify and screen users at any administrative level. The particular logical access controls used in a given facility and hardware infrastructure partially depend on the nature of the entity that owns and administrates the hardware setup. Document 4::: Logical access control The line between logical access and physical access can be blurred when physical access is controlled by software. For example, entry to a room may be controlled by a chip and PIN card and an electronic lock controlled by software. Only those in possession of an appropriate card, with an appropriate security level and with knowledge of the PIN are permitted entry to the room. Document 5::: Security control Security controls are safeguards or countermeasures to avoid, detect, counteract, or minimize security risks to physical property, information, computer systems, or other assets. In the field of information security, such controls protect the confidentiality, integrity and availability of information. Systems of controls can be referred to as frameworks or standards. Frameworks can enable an organization to manage security controls across different types of assets with consistency.
epfl-collab
Select the \emph{incorrect} statement. In ElGamal signature
['the public key is $K_p = y = g^x$, where $x$ is the secret key.', 'public parameters are a prime number $p$ and a generator $g$ of $\\mathbb{Z}_p^*$.', 'requires a secure channel to transfer the signature.', 'verification checks whether $y^rr^s=g^{H(M)}$ for signature $\\sigma=(r, s)$ of the message $M$ and the hash function $H$.']
C
null
Document 1::: ElGamal cryptosystem In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie–Hellman key exchange. It was described by Taher Elgamal in 1985. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. The Digital Signature Algorithm (DSA) is a variant of the ElGamal signature scheme, which should not be confused with ElGamal encryption. ElGamal encryption can be defined over any cyclic group $G$, like the multiplicative group of integers modulo n. Its security depends upon the difficulty of a certain problem in $G$ related to computing discrete logarithms. Document 2::: Uncertain inference Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus $d \to q$ is uncertain. This will affect the plausibility of a given query. Document 4::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol $\bot$. Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), $\bot$, is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 5::: Elliptic-curve Diffie-Hellman Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography.
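The verification equation in the question above, $y^r r^s = g^{H(M)} \pmod{p}$, can be exercised end to end with classic textbook-sized parameters ($p = 467$, $g = 2$); the modular-inverse form of $s$ is the standard ElGamal signing equation, and the stand-in "hash" below is an illustrative assumption:

    # Toy ElGamal signature with tiny parameters (illustration only; insecure).
    p, g = 467, 2            # public parameters: prime p, generator g of Z_p^*
    x = 127                  # secret key
    y = pow(g, x, p)         # public key y = g^x

    def H(m: int) -> int:    # stand-in "hash"; real schemes use a crypto hash
        return m % (p - 1)

    # Sign: pick k coprime to p-1, r = g^k, s = k^{-1} (H(m) - x*r) mod (p-1).
    k = 213                  # must be secret, fresh, and coprime to p-1
    m = 100
    r = pow(g, k, p)
    s = (pow(k, -1, p - 1) * (H(m) - x * r)) % (p - 1)   # Python 3.8+ pow

    # Verify: y^r * r^s == g^H(m) (mod p), as stated in the question above.
    assert (pow(y, r, p) * pow(r, s, p)) % p == pow(g, H(m), p)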
epfl-collab
You are given the task of choosing the parameters of a hash function. What output length will you recommend in order to be minimal and secure against second preimage attacks?
['40 bits', '80 bits', '320 bits', '160 bits']
D
null
Document 1::: Minimal perfect hash function For frequently changing S, dynamic perfect hash functions may be used at the cost of additional space. The space requirement to store the perfect hash function is in $O(n)$. The important performance parameters for perfect hash functions are the evaluation time, which should be constant, the construction time, and the representation size. Document 2::: Minimal perfect hash function In computer science, a perfect hash function h for a set S is a hash function that maps distinct elements in S to a set of m integers, with no collisions. In mathematical terms, it is an injective function. Perfect hash functions may be used to implement a lookup table with constant worst-case access time. A perfect hash function can, as any hash function, be used to implement hash tables, with the advantage that no collision resolution has to be implemented. Document 3::: Collision attack In cryptography, a collision attack on a cryptographic hash tries to find two inputs producing the same hash value, i.e. a hash collision. This is in contrast to a preimage attack where a specific target hash value is specified. There are roughly two types of collision attacks: Classical collision attack: find two different messages $m_1$ and $m_2$ such that $\mathrm{hash}(m_1) = \mathrm{hash}(m_2)$. More generally: Chosen-prefix collision attack: given two different prefixes $p_1$ and $p_2$, find two appendages $m_1$ and $m_2$ such that $\mathrm{hash}(p_1 \| m_1) = \mathrm{hash}(p_2 \| m_2)$, where $\|$ denotes the concatenation operation. Document 4::: Secure Hash Algorithms SHA-3: A hash function formerly called Keccak, chosen in 2012 after a public competition among non-NSA designers. It supports the same hash lengths as SHA-2, and its internal structure differs significantly from the rest of the SHA family. The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). NIST has updated Draft FIPS Publication 202, SHA-3 Standard separate from the Secure Hash Standard (SHS). Document 5::: Hashing algorithm A hash function is any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable length output. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter storage addressing.
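Why the recommended output length matters: a second preimage on an $n$-bit hash costs about $2^n$ evaluations, which is only infeasible for sufficiently large $n$. The sketch below makes the search tractable by deliberately truncating SHA-256 to 16 bits; the truncation is an illustrative weakening, not a real construction:

    import hashlib
    import itertools

    def h16(data: bytes) -> bytes:
        """Deliberately truncated 16-bit hash; weak on purpose for the demo."""
        return hashlib.sha256(data).digest()[:2]

    target = h16(b"original message")

    # Second-preimage search: ~2^16 trials on average for a 16-bit output.
    for i in itertools.count():
        candidate = b"forged-%d" % i
        if h16(candidate) == target:
            print(f"second preimage after {i + 1} trials: {candidate!r}")
            break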
epfl-collab
$\mathrm{GF}(2^k)$ is represented by the set of\dots
['polynomials of degree at most $k-1$ with coefficients in $\\mathbb{Z}_k$.', 'polynomials of degree at most $2$ with coefficients in $\\mathbb{Z}_k$.', 'polynomials of degree at most $k-1$ with binary coefficients.', 'polynomials of degree at most $2^k$ with coefficients in $\\mathbb{Z}$.']
C
null
Document 1::: Binary field GF(2) (also denoted $\mathbb{F}_2$ or $\mathbb{Z}/2\mathbb{Z}$) is the finite field of two elements (GF is the initialism of Galois field, another name for finite fields). Notations $Z_2$ and $\mathbb{Z}_2$ may be encountered although they can be confused with the notation of 2-adic integers. GF(2) is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual. The elements of GF(2) may be identified with the two possible values of a bit and to the boolean values true and false. It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations. Document 2::: Dot product representation of a graph Let G be a graph with vertex set V. Let F be a field, and f a function from V to $F^k$ such that xy is an edge of G if and only if f(x)·f(y) ≥ t. This is the dot product representation of G. The number t is called the dot product threshold, and the smallest possible value of k is called the dot product dimension. Document 3::: Q-binomial theorem In mathematics, the Gaussian binomial coefficients (also called Gaussian coefficients, Gaussian polynomials, or q-binomial coefficients) are q-analogs of the binomial coefficients. The Gaussian binomial coefficient, written as $\binom{n}{k}_q$ or $\begin{bmatrix}n\\k\end{bmatrix}_q$, is a polynomial in $q$ with integer coefficients, whose value when $q$ is set to a prime power counts the number of subspaces of dimension $k$ in a vector space of dimension $n$ over $\mathbb{F}_q$, a finite field with $q$ elements; i.e. it is the number of points in the finite Grassmannian $\mathrm{Gr}(k, \mathbb{F}_q^n)$. Document 4::: G2 (mathematics) In mathematics, G2 is the name of three simple Lie groups (a complex form, a compact real form and a split real form), their Lie algebras $\mathfrak{g}_2$, as well as some algebraic groups. They are the smallest of the five exceptional simple Lie groups. G2 has rank 2 and dimension 14. It has two fundamental representations, with dimension 7 and 14. The compact form of G2 can be described as the automorphism group of the octonion algebra or, equivalently, as the subgroup of SO(7) that preserves any chosen particular vector in its 8-dimensional real spinor representation (a spin representation). Document 5::: Generalized quadrangle For example, the 3x3 grid with P = {1,2,3,4,5,6,7,8,9} and B = {123, 456, 789, 147, 258, 369} is a trivial GQ with s = 2 and t = 1. A generalized quadrangle with parameters (s,t) is often denoted by GQ(s,t). The smallest non-trivial generalized quadrangle is GQ(2,2), whose representation has been dubbed "the doily" by Stan Payne in 1973.
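Concretely, an element of $\mathrm{GF}(2^k)$ can be stored as a $k$-bit integer whose bit $i$ is the coefficient of $t^i$ in a polynomial of degree at most $k-1$; addition is then coefficient-wise mod 2, i.e. XOR. A minimal sketch:

    # Elements of GF(2^k) as polynomials of degree <= k-1 with binary
    # coefficients, stored as k-bit integers (bit i = coefficient of t^i).
    k = 4
    a = 0b1011         # t^3 + t + 1
    b = 0b0110         # t^2 + t

    # Addition in GF(2^k) is coefficient-wise mod 2, i.e. XOR:
    print(bin(a ^ b))  # 0b1101 = t^3 + t^2 + 1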
epfl-collab
Tick the \emph{incorrect} assertion.
['SAS-based cryptography always requires the SAS to be collision-resistant.', 'One can obtain a secure channel from a narrowband authenticated channel using SAS-based cryptography.', 'The goal of SAS-based cryptography is to reduce the length of the string that has to be authenticated.', 'One way to authenticate a SAS is to use your phone.']
A
null
Document 1::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol $\bot$. Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), $\bot$, is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 2::: Attacking Faulty Reasoning Attacking Faulty Reasoning is a textbook on logical fallacies by T. Edward Damer that has been used for many years in a number of college courses on logic, critical thinking, argumentation, and philosophy. It explains 60 of the most commonly committed fallacies. Each of the fallacies is concisely defined and illustrated with several relevant examples. For each fallacy, the text gives suggestions about how to address or to "attack" the fallacy when it is encountered. The organization of the fallacies comes from the author’s own fallacy theory, which defines a fallacy as a violation of one of the five criteria of a good argument: the argument must be structurally well-formed; the premises must be relevant; the premises must be acceptable; the premises must be sufficient in number, weight, and kind; there must be an effective rebuttal of challenges to the argument. Each fallacy falls into at least one of Damer's five fallacy categories, which derive from the above criteria. Document 3::: Formal logic Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument. A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy "you are either with us or against us; you are not with us; therefore, you are against us". Document 4::: Validity (logic) In logic, specifically in deductive reasoning, an argument is valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. It is not required for a valid argument to have premises that are actually true, but to have premises that, if they were true, would guarantee the truth of the argument's conclusion. Valid arguments must be clearly expressed by means of sentences called well-formed formulas (also called wffs or simply formulas). The validity of an argument can be tested, proved or disproved, and depends on its logical form. Document 5::: Type I error rate In statistical hypothesis testing, a type I error is the mistaken rejection of an actually true null hypothesis (also known as a "false positive" finding or conclusion; example: "an innocent person is convicted"), while a type II error is the failure to reject a null hypothesis that is actually false (also known as a "false negative" finding or conclusion; example: "a guilty person is not convicted"). Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is a statistical impossibility if the outcome is not determined by a known, observable causal process.
By selecting a low threshold (cut-off) value and modifying the alpha (α) level, the quality of the hypothesis test can be increased. The knowledge of type I errors and type II errors is widely used in medical science, biometrics and computer science.Intuitively, type I errors can be thought of as errors of commission (i.e., the researcher unluckily concludes that something is the fact).
epfl-collab
According to Kerckhoffs's principle:
['The security of the cryptosystem should \\emph{not} rely on the secrecy of the cryptosystem itself.', 'The internal design of a cryptosystem should be public.', 'If there is a single security hole in a cryptosystem, somebody will discover it.', 'The internal design of a cryptosystem should \\emph{not} be public.']
A
null
Document 1::: Kerckhoffs's principle Kerckhoffs's principle (also called Kerckhoffs's desideratum, assumption, axiom, doctrine or law) of cryptography was stated by Dutch-born cryptographer Auguste Kerckhoffs in the 19th century. The principle holds that a cryptosystem should be secure, even if everything about the system, except the key, is public knowledge. This concept is widely embraced by cryptographers, in contrast to security through obscurity, which is not. Kerckhoffs's principle was phrased by American mathematician Claude Shannon as "the enemy knows the system", i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". Document 2::: Kerckhoffs's principle In that form, it is called Shannon's maxim. Another formulation by American researcher and professor Steven M. Bellovin is: In other words — design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.) Document 3::: Steven Kerckhoff Steven Paul Kerckhoff (born 1952) is a professor of mathematics at Stanford University, who works on hyperbolic 3-manifolds and Teichmüller spaces. He received his Ph.D. in mathematics from Princeton University in 1978, under the direction of William Thurston. Among his most famous results is his resolution of the Nielsen realization problem, a 1932 conjecture by Jakob Nielsen. Along with William J. Floyd, he wrote large parts of Thurston's influential Princeton lecture notes, and he is well known for his work (some of which is joint with Craig Hodgson) in exploring and clarifying Thurston's hyperbolic Dehn surgery. Kerckhoff is one of four academics from Stanford University, along with Gunnar Carlsson, Ralph Cohen, and R. James Milgram, who were instrumental in developing the controversial California Mathematics Academic Content Standards for the State Board of Education. Document 4::: K.p method In solid-state physics, the k·p perturbation theory is an approximated semi-empirical approach for calculating the band structure (particularly effective mass) and optical properties of crystalline solids. It is pronounced "k dot p", and is also called the "k·p method". This theory has been applied specifically in the framework of the Luttinger–Kohn model (after Joaquin Mazdak Luttinger and Walter Kohn), and of the Kane model (after Evan O. Kane). Document 5::: KKT conditions In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a (global) saddle point, i.e. a global maximum (minimum) over the domain of the choice variables and a global minimum (maximum) over the multipliers, which is why the Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem. The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951.
Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.
epfl-collab
KEM \dots
['stands for Keyless Encryption Mechanism.', 'is a symmetric-key algorithm.', 'is a public-key algorithm.', 'is a Korean encryption mechanism.']
C
null
Document 1::: Dot plot (statistics) A dot chart or dot plot is a statistical chart consisting of data points plotted on a fairly simple scale, typically using filled in circles. There are two common, yet very different, versions of the dot chart. The first has been used in hand-drawn (pre-computer era) graphs to depict distributions going back to 1884. The other version is described by William S. Cleveland as an alternative to the bar chart, in which dots are used to depict the quantitative values (e.g. counts) associated with categorical variables. Document 2::: Dot-decimal notation Dot-decimal notation is a presentation format for numerical data. It consists of a string of decimal numbers, using the full stop (dot) as a separation character. A common use of dot-decimal notation is in information technology where it is a method of writing numbers in octet-grouped base-10 (decimal) numbers. In computer networking, Internet Protocol Version 4 (IPv4) addresses are commonly written using the quad-dotted notation of four decimal integers, ranging from 0 to 255 each. Document 3::: Lewis Structure Lewis structures – also called Lewis dot formulas, Lewis dot structures, electron dot structures, or Lewis electron dot structures (LEDs) – are diagrams that show the bonding between atoms of a molecule, as well as the lone pairs of electrons that may exist in the molecule. A Lewis structure can be drawn for any covalently bonded molecule, as well as coordination compounds. The Lewis structure was named after Gilbert N. Lewis, who introduced it in his 1916 article The Atom and the Molecule. Document 4::: Nine dots problem The nine dots puzzle is a mathematical puzzle whose task is to connect nine squarely arranged points with a pen by four (or fewer) straight lines without lifting the pen. The puzzle has appeared under various other names over the years. Document 5::: Kinetic energy metamorphosis Kinetic energy metamorphosis (KEM) is a recently discovered tribological process of gradual crystal re-orientation and foliation of component minerals in certain rocks. It is caused by very high, localized application of kinetic energy. The required energy may be provided by prolonged battery of fluvially propelled bed load of cobbles, by glacial abrasion, tectonic deformation, and even by human action. It can result in the formation of laminae on specific metamorphic rocks that, while being chemically similar to the protolith, differ significantly in appearance and in their resistance to weathering or deformation. These tectonite layers are of whitish color and tend to survive granular or mass exfoliation much longer than the surrounding protolith.
epfl-collab
Tick the \emph{false} assertion. Two-key triple DES\dots
['is vulnerable to a certain variant of the meet-in-the-middle attack.', 'is less secure than AES.', 'is as secure as a block cipher using a key twice longer.', 'is more secure than double encryption.']
C
null
Document 1::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol $\bot$. Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), $\bot$, is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 2::: Two pass verification Two-pass verification, also called double data entry, is a data entry quality control method that was originally employed when data records were entered onto sequential 80-column Hollerith cards with a keypunch. In the first pass through a set of records, the data keystrokes were entered onto each card as the data entry operator typed them. On the second pass through the batch, an operator at a separate machine, called a verifier, entered the same data. The verifier compared the second operator's keystrokes with the contents of the original card. Document 3::: Double Ratchet Algorithm In cryptography, the Double Ratchet Algorithm (previously referred to as the Axolotl Ratchet) is a key management algorithm that was developed by Trevor Perrin and Moxie Marlinspike in 2013. It can be used as part of a cryptographic protocol to provide end-to-end encryption for instant messaging. After an initial key exchange it manages the ongoing renewal and maintenance of short-lived session keys. It combines a cryptographic so-called "ratchet" based on the Diffie–Hellman key exchange (DH) and a ratchet based on a key derivation function (KDF), such as a hash function, and is therefore called a double ratchet. The algorithm provides forward secrecy for messages, and implicit renegotiation of forward keys; properties for which the protocol is named. Document 4::: Logical NAND The Sheffer stroke of $P$ and $Q$ is the negation of their conjunction, $\neg(P \land Q)$. By De Morgan's laws, this is also equivalent to the disjunction of the negations of $P$ and $Q$, i.e. $\neg P \lor \neg Q$. Document 5::: Boolean flag A Boolean flag, truth bit or truth flag in computer science is a Boolean value represented as one or more bits, which encodes a state variable with two possible values.
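The meet-in-the-middle idea behind the quiz item (why double or two-key constructions are not as secure as a block cipher with a doubled key) can be sketched with a toy 8-bit-key "cipher"; E below is a stand-in invertible map, not a real block cipher:

    # Meet-in-the-middle against double encryption with toy 8-bit keys.
    def E(key, block):
        return (block + 17 * key + 1) % 256    # toy invertible map, hypothetical

    def D(key, block):
        return (block - 17 * key - 1) % 256    # inverse of E

    k1, k2 = 42, 200                           # unknown key pair to recover
    plain = 123
    cipher = E(k2, E(k1, plain))

    # Table of all middle values E(k1', plain): 2^8 work and 2^8 memory.
    mid = {E(g1, plain): g1 for g1 in range(256)}

    # Meet in the middle: decrypt once per k2' and look up (~2^8 work),
    # instead of the 2^16 work of exhaustive search over both keys.
    candidates = [(mid[D(g2, cipher)], g2)
                  for g2 in range(256) if D(g2, cipher) in mid]
    assert (k1, k2) in candidates

A real attack then filters the candidate list with a second plaintext-ciphertext pair; the point is that the work is roughly $2 \cdot 2^8$ instead of $2^{16}$.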
epfl-collab
Tick the \textbf{true} statement regarding $\mathbb{Z}_p^*$, where $p$ is an arbitrary prime number.
['For any $x \\in \\mathbb{Z}_p^*$ we have $x^{p}=1 \\pmod p$', 'It is a group of prime order when $p>3$.', 'It is isomorphic to $\\mathbb{Z}_n^*$ for all $n >0$.', 'It has $\\varphi(p-1)$ generators.']
D
null
Document 1::: Supersingular prime (algebraic number theory) In algebraic number theory, a supersingular prime for a given elliptic curve is a prime number with a certain relationship to that curve. If the curve E is defined over the rational numbers, then a prime p is supersingular for E if the reduction of E modulo p is a supersingular elliptic curve over the residue field Fp. Noam Elkies showed that every elliptic curve over the rational numbers has infinitely many supersingular primes. However, the set of supersingular primes has asymptotic density zero (if E does not have complex multiplication). Document 2::: Z* theorem In mathematics, George Glauberman's Z* theorem is stated as follows: Z* theorem: Let G be a finite group, with O(G) being its maximal normal subgroup of odd order. If T is a Sylow 2-subgroup of G containing an involution not conjugate in G to any other element of T, then the involution lies in Z*(G), which is the inverse image in G of the center of G/O(G). This generalizes the Brauer–Suzuki theorem (and the proof uses the Brauer–Suzuki theorem to deal with some small cases). Document 3::: Prime element An element p of a commutative ring R is said to be prime if it is not the zero element or a unit and whenever p divides ab for some a and b in R, then p divides a or p divides b. With this definition, Euclid's lemma is the assertion that prime numbers are prime elements in the ring of integers. Equivalently, an element p is prime if, and only if, the principal ideal (p) generated by p is a nonzero prime ideal. (Note that in an integral domain, the ideal (0) is a prime ideal, but 0 is an exception in the definition of 'prime element'.) Document 4::: Fermat pseudoprime Fermat's little theorem states that if p is prime and a is coprime to p, then $a^{p-1} - 1$ is divisible by p. For an integer a > 1, if a composite integer x divides $a^{x-1} - 1$, then x is called a Fermat pseudoprime to base a. In other words, a composite integer is a Fermat pseudoprime to base a if it successfully passes the Fermat primality test for the base a. The false statement that all numbers that pass the Fermat primality test for base 2 are prime is called the Chinese hypothesis. The smallest base-2 Fermat pseudoprime is 341. It is not a prime, since it equals 11·31, but it satisfies Fermat's little theorem: $2^{340} \equiv 1 \pmod{341}$ and thus passes the Fermat primality test for the base 2. Document 5::: Euler pseudoprime The equation can be tested rather quickly, which can be used for probabilistic primality testing. These tests are twice as strong as tests based on Fermat's little theorem. Every Euler pseudoprime is also a Fermat pseudoprime. It is not possible to produce a definite test of primality based on whether a number is an Euler pseudoprime because there exist absolute Euler pseudoprimes, numbers which are Euler pseudoprimes to every base relatively prime to themselves. The absolute Euler pseudoprimes are a subset of the absolute Fermat pseudoprimes, or Carmichael numbers, and the smallest absolute Euler pseudoprime is 1729 = 7×13×19.
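Two of the facts above can be checked in a few lines: 341 passes the base-2 Fermat test despite being composite, and $\mathbb{Z}_p^*$, being cyclic of order $p-1$, has $\varphi(p-1)$ generators (the quiz answer two records above). The totient here is computed by naive trial division:

    # 341 = 11 * 31 passes the base-2 Fermat test even though it is composite:
    assert pow(2, 340, 341) == 1 and 341 == 11 * 31

    def phi(n):
        """Euler's totient by trial division; fine for small n."""
        result, d = n, 2
        while d * d <= n:
            if n % d == 0:
                while n % d == 0:
                    n //= d
                result -= result // d
            d += 1
        if n > 1:
            result -= result // n
        return result

    # Count the generators of Z_p^* directly for a small prime p:
    p = 23
    gens = [g for g in range(1, p)
            if len({pow(g, e, p) for e in range(1, p)}) == p - 1]
    assert len(gens) == phi(p - 1)   # phi(22) = 10 generators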
epfl-collab
Tick the \textbf{false} statement regarding the DES round function.
['There is an expansion operation $E$ from 32 to 48 bits.', 'There is a permutation $P$ on 32-bits.', 'There are $8$ identical S-boxes (substitution boxes) of size $6 \\times 4$.', 'A round key is XORed to an internal register.']
C
null
Document 1::: Round constant . {\displaystyle R_{1},R_{2},...} are implemented using the same function, parameterized by the round constant and, for block ciphers, the round key from the key schedule. Parameterization is essential to reduce the self-similarity of the cipher, which could lead to slide attacks.Increasing the number of rounds "almost always" protects against differential and linear cryptanalysis, as for these tools the effort grows exponentially with the number of rounds. Document 2::: Round constant In cryptography, a round or round function is a basic transformation that is repeated (iterated) multiple times inside the algorithm. Splitting a large algorithmic function into rounds simplifies both implementation and cryptanalysis.For example, encryption using an oversimplified three-round cipher can be written as C = R 3 ( R 2 ( R 1 ( P ) ) ) {\displaystyle C=R_{3}(R_{2}(R_{1}(P)))} , where C is the ciphertext and P is the plaintext. Typically, rounds R 1 , R 2 , . . Document 3::: Rolling hash A rolling hash (also known as recursive hashing or rolling checksum) is a hash function where the input is hashed in a window that moves through the input. A few hash functions allow a rolling hash to be computed very quickly—the new hash value is rapidly calculated given only the old hash value, the old value removed from the window, and the new value added to the window—similar to the way a moving average function can be computed much more quickly than other low-pass filters; and similar to the way a Zobrist hash can be rapidly updated from the old hash value. One of the main applications is the Rabin–Karp string search algorithm, which uses the rolling hash described below. Another popular application is the rsync program, which uses a checksum based on Mark Adler's adler-32 as its rolling hash. Document 4::: Square (cipher) In cryptography, Square (sometimes written SQUARE) is a block cipher invented by Joan Daemen and Vincent Rijmen. The design, published in 1997, is a forerunner to Rijndael, which has been adopted as the Advanced Encryption Standard. Square was introduced together with a new form of cryptanalysis discovered by Lars Knudsen, called the "Square attack". The structure of Square is a substitution–permutation network with eight rounds, operating on 128-bit blocks and using a 128-bit key. Square is not patented. Document 5::: Boolean flag A Boolean flag, truth bit or truth flag in computer science is a Boolean value represented as one or more bits, which encodes a state variable with two possible values.
epfl-collab
Which of the following ciphers is based on arithmetics over the finite field $\mathrm{GF}(2^8)$?
['A5/1', 'DES', 'AES', 'RC4']
C
null
Document 1::: Finite field arithmetic In mathematics, finite field arithmetic is arithmetic in a finite field (a field containing a finite number of elements) contrary to arithmetic in a field with an infinite number of elements, like the field of rational numbers. There are infinitely many different finite fields. Their number of elements is necessarily of the form p^n where p is a prime number and n is a positive integer, and two finite fields of the same size are isomorphic. The prime p is called the characteristic of the field, and the positive integer n is called the dimension of the field over its prime field. Finite fields are used in a variety of applications, including in classical coding theory in linear block codes such as BCH codes and Reed–Solomon error correction, in cryptography algorithms such as the Rijndael (AES) encryption algorithm, in tournament scheduling, and in the design of experiments. Document 2::: Binary field GF(2) (also denoted F 2 {\displaystyle \mathbb {F} _{2}} , Z/2Z or Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } ) is the finite field of two elements (GF is the initialism of Galois field, another name for finite fields). Notations Z2 and Z 2 {\displaystyle \mathbb {Z} _{2}} may be encountered although they can be confused with the notation of 2-adic integers. GF(2) is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual. The elements of GF(2) may be identified with the two possible values of a bit and to the boolean values true and false. It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations. Document 3::: Fermat (computer algebra system) To implement (most) finite fields, the user finds an irreducible monic polynomial in a symbolic variable, say p ( t 1 ) , {\displaystyle p(t_{1}),} and commands Fermat to mod out by it. This may be continued recursively, q ( t 2 , t 1 ) , {\displaystyle q(t_{2},t_{1}),} etc. Low level data structures are set up to facilitate arithmetic and gcd over this newly created ground field. Two special fields, G F ( 2 8 ) {\displaystyle GF(2^{8})} and G F ( 2 16 ) , {\displaystyle GF(2^{16}),} are more efficiently implemented at the bit level. Document 4::: Function field sieve Previous work includes the work of D. Coppersmith about the DLP in fields of characteristic two. The discrete logarithm problem in a finite field consists of solving the equation a x = b {\displaystyle a^{x}=b} for a , b ∈ F p n {\displaystyle a,b\in \mathbb {F} _{p^{n}}} , p {\displaystyle p} a prime number and n {\displaystyle n} an integer. The function f: F p n → F p n , x ↦ a x {\displaystyle f:\mathbb {F} _{p^{n}}\to \mathbb {F} _{p^{n}},x\mapsto a^{x}} for a fixed a ∈ F p n {\displaystyle a\in \mathbb {F} _{p^{n}}} is a one-way function used in cryptography. Several cryptographic methods are based on the DLP such as the Diffie-Hellman key exchange, the El Gamal cryptosystem and the Digital Signature Algorithm. Document 5::: Field (algebra) Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements. The relation of two fields is expressed by the notion of a field extension. Galois theory, initiated by Évariste Galois in the 1830s, is devoted to understanding the symmetries of field extensions.
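AES is the cipher built on $\mathrm{GF}(2^8)$: its S-box and MixColumns step use field arithmetic modulo $x^8 + x^4 + x^3 + x + 1$ (0x11B). A minimal Python sketch of that field multiplication, checked against the well-known test product {57}·{83} = {c1} from the AES specification:

def gf256_mul(a, b):
    """Multiply two bytes as elements of GF(2^8) with the AES polynomial 0x11B."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a          # polynomial addition is XOR
        carry = a & 0x80
        a = (a << 1) & 0xFF      # multiply a by x
        if carry:
            a ^= 0x1B            # reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return result

# Test vector from the AES specification:
assert gf256_mul(0x57, 0x83) == 0xC1
print(hex(gf256_mul(0x57, 0x83)))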
epfl-collab
Ensuring information integrity means that\dots
['\\dots the information should not leak to any unexpected party.', '\\dots DES is secure.', '\\dots the information should make clear who the author of it is.', '\\dots the information must be protected against any malicious modification.']
D
null
Document 1::: Data fidelity Data integrity is the maintenance of, and the assurance of, data accuracy and consistency over its entire life-cycle and is a critical aspect to the design, implementation, and usage of any system that stores, processes, or retrieves data. The term is broad in scope and may have widely different meanings depending on the specific context – even under the same general umbrella of computing. It is at times used as a proxy term for data quality, while data validation is a prerequisite for data integrity. Data integrity is the opposite of data corruption. Document 2::: Data fidelity In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties. Any unintended changes to data as the result of a storage, retrieval or processing operation, including malicious intent, unexpected hardware failure, and human error, is failure of data integrity. If the changes are the result of unauthorized access, it may also be a failure of data security. Depending on the data involved this could manifest itself as benign as a single pixel in an image appearing a different color than was originally recorded, to the loss of vacation pictures or a business-critical database, to even catastrophic loss of human life in a life-critical system. Document 3::: Clark–Wilson model The Clark–Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computing system. The model is primarily concerned with formalizing the notion of information integrity. Information integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent. An integrity policy describes how the data items in the system should be kept valid from one state of the system to the next and specifies the capabilities of various principals in the system. The model uses security labels to grant access to objects via transformation procedures and a restricted interface model. Document 4::: Check Point Integrity Check Point Integrity is an endpoint security software product developed by Check Point Software Technologies. It is designed to protect personal computers and the networks they connect to from computer worms, Trojan horses, spyware, and intrusion attempts by hackers. The software aims to stop new PC threats and attacks before signature updates have been installed on the PC. The software includes. Document 5::: Bit-count integrity In telecommunication, the term bit-count integrity (BCI) has the following meanings: In message communications, the preservation of the exact number of bits that are in the original message. In connection-oriented services, preservation of the number of bits per unit time.Note: Bit-count integrity is not the same as bit integrity, which requires that the delivered bits correspond exactly with the original bits. Source: from Federal Standard 1037C and from MIL-STD-188
epfl-collab
Given an odd prime $p$, for any $a \in \mathbb{Z}_p$ the equation
['$x^2 - a = 0$ always has a solution.', '$x^2 - a = 0$ has at most two solutions.', '$x^2 - a = 0$ has exactly two solutions.', '$x^2 - a = 0$ may have four solutions.']
B
null
Document 1::: Euler pseudoprime In arithmetic, an odd composite integer n is called an Euler pseudoprime to base a, if a and n are coprime, and a ( n − 1 ) / 2 ≡ ± 1 ( mod n ) {\displaystyle a^{(n-1)/2}\equiv \pm 1{\pmod {n}}} (where mod refers to the modulo operation). The motivation for this definition is the fact that all prime numbers p satisfy the above equation, which can be deduced from Fermat's little theorem. Fermat's theorem asserts that if p is prime, and coprime to a, then a^(p−1) ≡ 1 (mod p). Suppose that p > 2 is prime, then p can be expressed as 2q + 1 where q is an integer. Document 2::: Binary quadratic form This choice is motivated by their status as the driving force behind the development of algebraic number theory. Since the late nineteenth century, binary quadratic forms have given up their preeminence in algebraic number theory to quadratic and more general number fields, but advances specific to binary quadratic forms still occur on occasion. Pierre Fermat stated that if p is an odd prime then the equation p = x 2 + y 2 {\displaystyle p=x^{2}+y^{2}} has a solution iff p ≡ 1 ( mod 4 ) {\displaystyle p\equiv 1{\pmod {4}}} , and he made similar statements about the equations p = x 2 + 2 y 2 {\displaystyle p=x^{2}+2y^{2}} , p = x 2 + 3 y 2 {\displaystyle p=x^{2}+3y^{2}} , p = x 2 − 2 y 2 {\displaystyle p=x^{2}-2y^{2}} and p = x 2 − 3 y 2 {\displaystyle p=x^{2}-3y^{2}} . Document 3::: Supersingular prime (algebraic number theory) In algebraic number theory, a supersingular prime for a given elliptic curve is a prime number with a certain relationship to that curve. If the curve E is defined over the rational numbers, then a prime p is supersingular for E if the reduction of E modulo p is a supersingular elliptic curve over the residue field Fp. Noam Elkies showed that every elliptic curve over the rational numbers has infinitely many supersingular primes. However, the set of supersingular primes has asymptotic density zero (if E does not have complex multiplication). Document 4::: Quadratic residue symbol Let p {\displaystyle p} be an odd prime number. An integer a {\displaystyle a} is a quadratic residue modulo p {\displaystyle p} if it is congruent to a perfect square modulo p {\displaystyle p} and is a quadratic nonresidue modulo p {\displaystyle p} otherwise. The Legendre symbol is a function of a {\displaystyle a} and p {\displaystyle p} defined as ( a p ) = { 1 if a is a quadratic residue modulo p and a ≢ 0 ( mod p ) , − 1 if a is a quadratic nonresidue modulo p , 0 if a ≡ 0 ( mod p ) . {\displaystyle \left({\frac {a}{p}}\right)={\begin{cases}1&{\text{if }}a{\text{ is a quadratic residue modulo }}p{\text{ and }}a\not \equiv 0{\pmod {p}},\\-1&{\text{if }}a{\text{ is a quadratic nonresidue modulo }}p,\\0&{\text{if }}a\equiv 0{\pmod {p}}.\end{cases}}} Legendre's original definition was by means of the explicit formula (a/p) ≡ a^((p−1)/2) (mod p) and (a/p) ∈ {−1, 0, 1}. Document 5::: Euler pseudoprime Thus, a^((2q+1)−1) ≡ 1 (mod p), which means that a^(2q) − 1 ≡ 0 (mod p). This can be factored as (a^q − 1)(a^q + 1) ≡ 0 (mod p), which is equivalent to a^((p−1)/2) ≡ ±1 (mod p).
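Because $\mathbb{Z}_p$ is a field for prime $p$, the degree-2 polynomial $x^2 - a$ has at most two roots, which is why option B holds (0 roots for a nonresidue, 1 for $a = 0$, 2 for a nonzero residue). A brute-force Python check over a small prime (function name is our own):

def square_roots(a, p):
    # All x in Z_p with x^2 ≡ a (mod p), by exhaustive search.
    return [x for x in range(p) if (x * x - a) % p == 0]

p = 11
for a in range(p):
    roots = square_roots(a, p)
    assert len(roots) <= 2
    print(a, roots)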
epfl-collab
Which one of the following notions is not in the fundamental trilogy of cryptography?
['authentication', 'integrity', 'privacy', 'confidentiality']
C
null
Document 1::: Non-commutative cryptography Non-commutative cryptography is the area of cryptology where the cryptographic primitives, methods and systems are based on algebraic structures like semigroups, groups and rings which are non-commutative. One of the earliest applications of a non-commutative algebraic structure for cryptographic purposes was the use of braid groups to develop cryptographic protocols. Later several other non-commutative structures like Thompson groups, polycyclic groups, Grigorchuk groups, and matrix groups have been identified as potential candidates for cryptographic applications. Document 2::: Standard model (cryptography) In cryptography the standard model is the model of computation in which the adversary is only limited by the amount of time and computational power available. Other names used are bare model and plain model. Cryptographic schemes are usually based on complexity assumptions, which state that some problems, such as factorization, cannot be solved in polynomial time. Schemes that can be proven secure using only complexity assumptions are said to be secure in the standard model. Document 3::: Cryptosystem In cryptography, a cryptosystem is a suite of cryptographic algorithms needed to implement a particular security service, such as confidentiality (encryption).Typically, a cryptosystem consists of three algorithms: one for key generation, one for encryption, and one for decryption. The term cipher (sometimes cypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the term cryptosystem is most often used when the key generation algorithm is important. For this reason, the term cryptosystem is commonly used to refer to public key techniques; however both "cipher" and "cryptosystem" are used for symmetric key techniques. Document 4::: Non-commutative cryptography In contrast to non-commutative cryptography, the currently widely used public-key cryptosystems like RSA cryptosystem, Diffie–Hellman key exchange and elliptic curve cryptography are based on number theory and hence depend on commutative algebraic structures. Non-commutative cryptographic protocols have been developed for solving various cryptographic problems like key exchange, encryption-decryption, and authentication. These protocols are very similar to the corresponding protocols in the commutative case. Document 5::: Three-stage quantum cryptography protocol The three-stage quantum cryptography protocol, also known as Kak's three-stage protocol is a method of data encryption that uses random polarization rotations by both Alice and Bob, the two authenticated parties, that was proposed by Subhash Kak. In principle, this method can be used for continuous, unbreakable encryption of data if single photons are used. It is different from methods of QKD (quantum key distribution) for it can be used for direct encryption of data, although it could also be used for exchanging keys. The basic idea behind this method is that of sending secrets (or valuables) through an unreliable courier by having both Alice and Bob place their locks on the box containing the secret, which is also called double-lock cryptography.
epfl-collab
Consider a mobile station (MS) with a SIM card associated to a home network (HN). The MS tries to connect to a visited network (VN). In the GSM authentication, who knows the key $K_i$?
['SIM only.', 'SIM and HN.', 'SIM, MS, VN and HN.', 'SIM, MS and HN.']
B
null
Document 1::: Cell Global Identity Cell Global Identity (CGI) is a globally unique identifier for a Base Transceiver Station in mobile phone networks. It consists of four parts: Mobile Country Code (MCC), Mobile Network Code (MNC), Location Area Code (LAC) and Cell Identification (CI). It is an integral part of 3GPP specifications for mobile networks, for example, for identifying individual base stations to "handover" ongoing phone calls between separately controlled base stations, or between different mobile technologies.MCC and MNC make up a PLMN identifier, and PLMN and LAC make up a location area identity (LAI), which uniquely identifies a Location Area of a given operator's network. So a CGI can be seen as a LAI with added Cell Identification, to further identify the individual base station of that Location Area. Document 2::: COMP128 The COMP128 algorithms are implementations of the A3 and A8 functions defined in the GSM standard. A3 is used to authenticate the mobile station to the network. A8 is used to generate the session key used by A5 to encrypt the data transmitted between the mobile station and the BTS. There are three versions of COMP128. Document 3::: Key selection vector A Key Selection Vector (KSV) is a numerical identifier associated with a Device Key Set which is distributed by a Licensor or its designee to Adopters and is used to support authentication of Licensed Products and Revocation as part of the HDCP copy protection system. The KSV is used to generate confidential keys, specifically used in the Restricted Authentication process of HDCP. Restricted Authentication is an AKE method for devices with limited computing resources. This method is used by copying devices of any kind (such as DV recorders or D-VHS recorders) and devices communicating with them for authenticating protected content. The restricted authentication protocol uses asymmetric key management and common key cryptography, and relies on the use of shared secrets and hash functions to respond to a random challenge. Document 4::: Addressable Systems In the case of simple hardware devices like the pager, the address is simply the electronic serial number (and later IMEI/MEID) in its firmware, or physically manufactured into its circuitry. In the case of GSM mobile phones, it also includes the subscriber identity module, which is also present as a smart card on satellite TV receivers, or a different PCMCIA CableCARD for cable TV. Addressing and encryption are used together for conditional access to different TV channel bundles which a pay-TV customer has or has not paid for. Addressing is also done in software at higher levels such as IP addresses, which can be dynamically allocated. Even physically separate devices are now addressable, such as to enforce revocation lists for digital restrictions, or to use the former DIVX DVD video rentals, although the latter only used its identity to "phone home" for billing purposes. Document 5::: Enhanced Network Selection Enhanced Network Selection (sometimes referred to as Enhanced Network Service, or simply ENS) extends GSM by making it possible for a GSM cellular device (e.g., handset) to be "homed" OTA (over the air) to different networks. This made it possible for Cingular Wireless, while it was still operating two networks post merger with AT&T Wireless, to "home" a given cellular device to either the "orange" (old Cingular Wireless) or "blue" (old AT&T Wireless) network. 
That can improve performance for certain customers because without ENS a GSM device will only select a non-home network when a "usable" home network signal doesn't exist, even when a non-home network has a better signal or when the home network has no available capacity.
epfl-collab
Select the \emph{incorrect} statement. The birthday paradox
['can be implemented using a table of size $\\Theta(\\sqrt{N})$', 'is a brute force technique.', 'can be implemented with constant memory using the Rho ($\\rho$) method.', 'is used to recover the secret key of AES in $2^{64}$ computations.']
D
null
Document 1::: Kleene–Rosser paradox In mathematics, the Kleene–Rosser paradox is a paradox that shows that certain systems of formal logic are inconsistent, in particular the version of Haskell Curry's combinatory logic introduced in 1930, and Alonzo Church's original lambda calculus, introduced in 1932–1933, both originally intended as systems of formal logic. The paradox was exhibited by Stephen Kleene and J. B. Rosser in 1935. Document 2::: Outcomes paradox == Background == Schizophrenia is a severe, chronic disorder characterised by disturbances in thought, perception and behaviour. One way a psychiatrist can diagnose it is if an individual has experienced positive symptoms (e.g., hallucinations) and/or negative symptoms (e.g., apathy) consistently for a month. Its treatment usually involves a combination of cognitive behavioural therapy and antipsychotic medication. Although the treatment for the disorder is the same cross-culturally, treatment success rates differ cross-culturally, and this phenomenon is known as the outcomes paradox. Document 3::: Berry's paradox The Berry paradox is a self-referential paradox arising from an expression like "The smallest positive integer not definable in under sixty letters" (a phrase with fifty-seven letters). Bertrand Russell, the first to discuss the paradox in print, attributed it to G. G. Berry (1867–1928), a junior librarian at Oxford's Bodleian Library. Russell called Berry "the only person in Oxford who understood mathematical logic". The paradox was called "Richard's paradox" by Jean-Yves Girard. Document 4::: Liar's paradox It is still generally called the "liar paradox" although abstraction is made precisely from the liar making the statement. Trying to assign to this statement, the strengthened liar, a classical binary truth value leads to a contradiction. If "this sentence is false" is true, then it is false, but the sentence states that it is false, and if it is false, then it must be true, and so on. Document 5::: Simpson's Paradox Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics, and is particularly problematic when frequency data are unduly given causal interpretations. The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling (e.g., through cluster analysis).
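The incorrect option is the AES claim: a birthday bound of $2^{64}$ applies to finding collisions in a 128-bit output space, not to recovering a 128-bit secret key. A sketch of the table-based birthday search that needs roughly $\sqrt{N}$ memory, on a toy 24-bit hash built by truncating SHA-256 (the truncation and names are our own illustration):

import hashlib

def truncated_hash(msg: bytes, bits: int = 24) -> int:
    # Toy n-bit "hash": truncate SHA-256, so there are N = 2**bits outputs.
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def birthday_collision(bits: int = 24):
    # Expected table size is about sqrt(N) = 2**(bits/2) entries.
    seen = {}
    i = 0
    while True:
        h = truncated_hash(i.to_bytes(8, "big"), bits)
        if h in seen:
            return seen[h], i, h
        seen[h] = i
        i += 1

a, b, h = birthday_collision()
print(f"collision: messages {a} and {b} share 24-bit hash {h:#x}")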
epfl-collab
Kerckhoffs's principle says:
['security should not rely on the secrecy of the cryptosystem itself.', 'the speed of CPUs doubles every 18 months', 'security should not rely on the secrecy of the key.', 'cryptosystems must be published.']
A
null
Document 1::: Kerckhoffs's principle Kerckhoffs's principle (also called Kerckhoffs's desideratum, assumption, axiom, doctrine or law) of cryptography was stated by Dutch-born cryptographer Auguste Kerckhoffs in the 19th century. The principle holds that a cryptosystem should be secure, even if everything about the system, except the key, is public knowledge. This concept is widely embraced by cryptographers, in contrast to security through obscurity, which is not. Kerckhoffs's principle was phrased by American mathematician Claude Shannon as "the enemy knows the system", i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". Document 2::: Kerckhoffs's principle In that form, it is called Shannon's maxim. Another formulation by American researcher and professor Steven M. Bellovin is: In other words — design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.) Document 3::: Steven Kerckhoff Steven Paul Kerckhoff (born 1952) is a professor of mathematics at Stanford University, who works on hyperbolic 3-manifolds and Teichmüller spaces. He received his Ph.D. in mathematics from Princeton University in 1978, under the direction of William Thurston. Among his most famous results is his resolution of the Nielsen realization problem, a 1932 conjecture by Jakob Nielsen. Along with William J. Floyd, he wrote large parts of Thurston's influential Princeton lecture notes, and he is well known for his work (some of which is joint with Craig Hodgson) in exploring and clarifying Thurston's hyperbolic Dehn surgery. Kerckhoff is one of four academics from Stanford University, along with Gunnar Carlsson, Ralph Cohen, and R. James Milgram, who were instrumental in developing the controversial California Mathematics Academic Content Standards for the State Board of Education. Document 4::: Krogh's Principle Krogh's principle states that "for such a large number of problems there will be some animal of choice, or a few such animals, on which it can be most conveniently studied." This concept is central to those disciplines of biology that rely on the comparative method, such as neuroethology, comparative physiology, and more recently functional genomics. Document 5::: KKT conditions In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a (global) saddle point, i.e. a global maximum (minimum) over the domain of the choice variables and a global minimum (maximum) over the multipliers, which is why the Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951. 
Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.
epfl-collab
Tick the \emph{correct} assertion. The Vernam cipher provides \dots
['integrity.', 'authenticity.', 'none of the mentioned properties.', 'confidentiality.']
D
null
Document 1::: CryptMT In cryptography, CryptMT is a stream cipher algorithm which internally uses the Mersenne twister. It was developed by Makoto Matsumoto, Mariko Hagita, Takuji Nishimura and Mutsuo Saito and is patented. It has been submitted to the eSTREAM project of the eCRYPT network. In that submission to eSTREAM, the authors also included another cipher named Fubuki, which also uses the Mersenne twister. Document 2::: Verhoeff algorithm The Verhoeff algorithm is a checksum for error detection first published by Dutch mathematician Jacobus Verhoeff in 1969. It was the first decimal check digit algorithm which detects all single-digit errors, and all transposition errors involving two adjacent digits, which was at the time thought impossible with such a code. The method was independently discovered by H. Peter Gumm in 1985, this time including a formal proof and an extension to any base. Document 3::: Padding oracle attack In cryptography, a padding oracle attack is an attack which uses the padding validation of a cryptographic message to decrypt the ciphertext. In cryptography, variable-length plaintext messages often have to be padded (expanded) to be compatible with the underlying cryptographic primitive. The attack relies on having a "padding oracle" who freely responds to queries about whether a message is correctly padded or not. Padding oracle attacks are mostly associated with CBC mode decryption used within block ciphers. Padding modes for asymmetric algorithms such as OAEP may also be vulnerable to padding oracle attacks. Document 4::: ISAAC (cipher) ISAAC (indirection, shift, accumulate, add, and count) is a cryptographically secure pseudorandom number generator and a stream cipher designed by Robert J. Jenkins Jr. in 1993. The reference implementation source code was dedicated to the public domain. "I developed (...) tests to break a generator, and I developed the generator to pass the tests. The generator is ISAAC." Document 5::: LEX (cipher) Designed by Alex Biryukov, LEX is a Phase 2 Focus candidate for the eSTREAM project. It is not patented. A new revision of LEX protects against a slide attack found in an earlier version.
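The Vernam cipher (one-time pad) gives confidentiality only: flipping a ciphertext bit flips the corresponding plaintext bit undetected, so neither integrity nor authenticity is provided. A minimal Python sketch illustrating both points:

import os

def otp_xor(data: bytes, key: bytes) -> bytes:
    # Vernam cipher: XOR with a truly random key as long as the message.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"PAY 100"
key = os.urandom(len(msg))
ct = otp_xor(msg, key)

# Confidentiality: without the key, ct is uniformly random.
# No integrity: an attacker can flip plaintext bits blindly.
tampered = bytearray(ct)
tampered[4] ^= ord('1') ^ ord('9')    # change the leading '1' to a '9'
print(otp_xor(bytes(tampered), key))  # b'PAY 900'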
epfl-collab
What is the average complexity of exhaustive search when the key is distributed uniformly at random over $N$ keys?
['$\\frac{N+1}{2}$', '$\\sqrt{N}$', '$\\log N$', '$2^N$']
A
null
Document 1::: Search algorithms Finally, hashing directly maps keys to records based on a hash function. Algorithms are often evaluated by their computational complexity, or maximum theoretical run time. Binary search functions, for example, have a maximum complexity of O(log n), or logarithmic time. In simple terms, the maximum number of operations needed to find the search target is a logarithmic function of the size of the search space. Document 2::: Average-case complexity Suppose, however, that the inputs are drawn randomly from the uniform distribution of strings with length n, and that A runs in time n^2 on all inputs except the string 1^n for which A takes time 2^n. Then it can be easily checked that the expected running time of A is polynomial but the expected running time of B is exponential. To create a more robust definition of average-case efficiency, it makes sense to allow an algorithm A to run longer than polynomial time on some inputs but the fraction of inputs on which A requires larger and larger running time becomes smaller and smaller. This intuition is captured in the following formula for average polynomial running time, which balances the polynomial trade-off between running time and fraction of inputs: Pr x ∈ R D n ⁡ [ t A ( x ) ≥ t ] ≤ p ( n ) / t ϵ {\displaystyle \Pr _{x\in _{R}D_{n}}\left[t_{A}(x)\geq t\right]\leq {\frac {p(n)}{t^{\epsilon }}}} for every n, t > 0 and polynomial p, where t_A(x) denotes the running time of algorithm A on input x, and ε is a positive constant value. Alternatively, this can be written as E x ∈ R D n ⁡ [ t A ( x ) ϵ / n ] ≤ C {\displaystyle E_{x\in _{R}D_{n}}\left[{\frac {t_{A}(x)^{\epsilon }}{n}}\right]\leq C} for some constants C and ε, where n = |x|. In other words, an algorithm A has good average-case complexity if, after running for t_A(n) steps, A can solve all but a n^c/(t_A(n))^ε fraction of inputs of length n, for some ε, c > 0. Document 3::: Average case complexity In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity which considers the maximal complexity of the algorithm over all possible inputs. There are three primary motivations for studying average-case complexity. First, although some problems may be intractable in the worst-case, the inputs which elicit this behavior may rarely occur in practice, so the average-case complexity may be a more accurate measure of an algorithm's performance. Document 4::: Average case complexity Alternatively, a randomized algorithm can be used. The analysis of such algorithms leads to the related notion of an expected complexity. Document 5::: Probabilistic analysis For non-probabilistic, more specifically deterministic, algorithms, the most common types of complexity estimates are the average-case complexity and the almost-always complexity. To obtain the average-case complexity, given an input distribution, the expected time of an algorithm is evaluated, whereas for the almost-always complexity estimate, it is evaluated that the algorithm admits a given complexity estimate that almost surely holds. In probabilistic analysis of probabilistic (randomized) algorithms, the distributions or average of all possible choices in randomized steps is also taken into account, in addition to the input distributions.
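With a uniformly distributed key tested in any fixed order, the key's position among the $N$ candidates is uniform on $\{1,\dots,N\}$, so the expected number of trials is $(1+2+\cdots+N)/N = (N+1)/2$. A short Python check, exact by enumeration plus a small random simulation (the function name is our own):

import random

def trials_to_find_key(N: int, key: int) -> int:
    # Test candidate keys 0, 1, ..., N-1 in order; count tests until the hit.
    for count, guess in enumerate(range(N), start=1):
        if guess == key:
            return count
    raise AssertionError("key must lie in range(N)")

N = 1000
# Exact average over a uniform key: (1 + 2 + ... + N) / N = (N + 1) / 2.
exact = sum(trials_to_find_key(N, k) for k in range(N)) / N
assert exact == (N + 1) / 2
# A small random simulation agrees on average.
sim = sum(trials_to_find_key(N, random.randrange(N)) for _ in range(2000)) / 2000
print(exact, round(sim, 1))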
epfl-collab
Select the \emph{incorrect} statement. Generic attacks on DES include
['time-memory tradeoff against 2-key Triple DES.', 'known plaintext attack by Van Oorschot and Wiener against 2-key Triple DES.', 'collision attack against 3-key Triple DES.', 'meet-in-the-middle attack against 3-key Triple DES.']
C
null
Document 1::: Padding oracle attack In cryptography, a padding oracle attack is an attack which uses the padding validation of a cryptographic message to decrypt the ciphertext. In cryptography, variable-length plaintext messages often have to be padded (expanded) to be compatible with the underlying cryptographic primitive. The attack relies on having a "padding oracle" who freely responds to queries about whether a message is correctly padded or not. Padding oracle attacks are mostly associated with CBC mode decryption used within block ciphers. Padding modes for asymmetric algorithms such as OAEP may also be vulnerable to padding oracle attacks. Document 2::: Brute force attack A brute-force attack is a cryptanalytic attack that can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in an information-theoretically secure manner). Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier. When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Document 3::: Session poisoning Session poisoning (also referred to as "session data pollution" and "session modification") is a method to exploit insufficient input validation within a server application. Typically a server application that is vulnerable to this type of exploit will copy user input into session variables. The underlying vulnerability is a state management problem: shared state, race condition, ambiguity in use or plain unprotected modifications of state values. Session poisoning has been demonstrated in server environments where different, non-malicious applications (scripts) share the same session states but where usage differ, causing ambiguity and race conditions. Session poisoning has been demonstrated in scenarios where attacker is able to introduce malicious scripts into the server environment, which is possible if attacker and victim share a web host. Document 4::: Bit-flipping attack A bit-flipping attack is an attack on a cryptographic cipher in which the attacker can change the ciphertext in such a way as to result in a predictable change of the plaintext, although the attacker is not able to learn the plaintext itself. Note that this type of attack is not—directly—against the cipher itself (as cryptanalysis of it would be), but against a particular message or series of messages. In the extreme, this could become a Denial of service attack against all messages on a particular channel using that cipher.The attack is especially dangerous when the attacker knows the format of the message. In such a situation, the attacker can turn it into a similar message but one in which some important information is altered. Document 5::: Distinguishing attack In cryptography, a distinguishing attack is any form of cryptanalysis on data encrypted by a cipher that allows an attacker to distinguish the encrypted data from random data. Modern symmetric-key ciphers are specifically designed to be immune to such an attack. In other words, modern encryption schemes are pseudorandom permutations and are designed to have ciphertext indistinguishability. If an algorithm is found that can distinguish the output from random faster than a brute force search, then that is considered a break of the cipher. 
A similar concept is the known-key distinguishing attack, whereby an attacker knows the key and can find a structural property in the cipher, where the transformation from plaintext to ciphertext is not random.
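For the meet-in-the-middle idea behind two of the listed attacks: encrypt the known plaintext under every candidate first key and store a table, then decrypt the ciphertext under every candidate second key and intersect. A toy Python sketch on a deliberately weak one-byte "cipher" (the cipher itself is invented purely to show the attack structure, reducing 2^16 work to about 2·2^8):

def toy_encrypt(block: int, key: int) -> int:
    # Deliberately weak toy "cipher": XOR with the key, then rotate left by 3.
    x = (block ^ key) & 0xFF
    return ((x << 3) | (x >> 5)) & 0xFF

def toy_decrypt(block: int, key: int) -> int:
    x = ((block >> 3) | (block << 5)) & 0xFF  # rotate right by 3
    return (x ^ key) & 0xFF

def double_encrypt(block, k1, k2):
    return toy_encrypt(toy_encrypt(block, k1), k2)

# Known plaintext/ciphertext pair under an unknown key pair (k1, k2).
k1, k2, pt = 0x3A, 0xC5, 0x41
ct = double_encrypt(pt, k1, k2)

# Meet in the middle: build the forward table, then match from the back.
table = {toy_encrypt(pt, k): k for k in range(256)}
candidates = [(table[mid], k) for k in range(256)
              if (mid := toy_decrypt(ct, k)) in table]
assert (k1, k2) in candidates
print(len(candidates), "candidate key pairs; true pair recovered")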
epfl-collab
AES\dots
['\\dots has a fixed key length \\emph{and} a variable block length.', '\\dots has a variable key length \\emph{and} a fixed block length.', '\\dots has a fixed key length \\emph{and} a fixed block length.', '\\dots has a variable key length \\emph{and} a variable block length.']
B
null
Document 1::: Dot-decimal notation Dot-decimal notation is a presentation format for numerical data. It consists of a string of decimal numbers, using the full stop (dot) as a separation character.A common use of dot-decimal notation is in information technology where it is a method of writing numbers in octet-grouped base-10 (decimal) numbers. In computer networking, Internet Protocol Version 4 (IPv4) addresses are commonly written using the quad-dotted notation of four decimal integers, ranging from 0 to 255 each. Document 2::: X-ray induced Auger electron spectroscopy Auger electron spectroscopy (AES; pronounced in French) is a common analytical technique used specifically in the study of surfaces and, more generally, in the area of materials science. It is a form of electron spectroscopy that relies on the Auger effect, based on the analysis of energetic electrons emitted from an excited atom after a series of internal relaxation events. The Auger effect was discovered independently by both Lise Meitner and Pierre Auger in the 1920s. Document 3::: UES (cipher) In cryptography, UES (Universal Encryption Standard) is a block cipher designed in 1999 by Helena Handschuh and Serge Vaudenay. They proposed it as a transitional step, to prepare for the completion of the AES process. UES was designed with the same interface as AES: a block size of 128 bits and key size of 128, 192, or 256 bits. It consists of two parallel Triple DES encryptions on the halves of the block, with key whitening and key-dependent swapping of bits between the halves. The key schedule is taken from DEAL. Document 4::: Flame emission spectroscopy Atomic emission spectroscopy (AES) is a method of chemical analysis that uses the intensity of light emitted from a flame, plasma, arc, or spark at a particular wavelength to determine the quantity of an element in a sample. The wavelength of the atomic spectral line in the emission spectrum gives the identity of the element while the intensity of the emitted light is proportional to the number of atoms of the element. The sample may be excited by various methods. Document 5::: SSS (cipher) In cryptography, SSS is a stream cypher algorithm developed by Gregory Rose, Philip Hawkes, Michael Paddon, and Miriam Wiggers de Vries. It includes a message authentication code feature. It has been submitted to the eSTREAM Project of the eCRYPT network. It has not selected for focus nor for consideration during Phase 2; it has been 'archived'. == References ==
epfl-collab
Given that $100000000003$ is prime, what is the cardinality of $\mathbf{Z}_{200000000006}^*$?
['$100000000002$', '$100000000003$', '$200000000006$', '$2$']
A
null
Document 1::: Mersenne numbers The largest known prime number, 2^82,589,933 − 1, is a Mersenne prime. Since 1997, all newly found Mersenne primes have been discovered by the Great Internet Mersenne Prime Search, a distributed computing project. In December 2020, a major milestone in the project was passed after all exponents below 100 million were checked at least once. Document 2::: 3511 3511 (three thousand, five hundred and eleven) is the natural number following 3510 and preceding 3512. 3511 is a prime number, and is also an emirp: a different prime when its digits are reversed. 3511 is a Wieferich prime, found to be so by N. G. W. H. Beeger in 1922 and the largest known – the only other being 1093. If any other Wieferich primes exist, they must be greater than 6.7×10^15. 3511 is the 27th centered decagonal number. Document 3::: Palindromic prime In mathematics, a palindromic prime (sometimes called a palprime) is a prime number that is also a palindromic number. Palindromicity depends on the base of the number system and its notational conventions, while primality is independent of such concerns. The first few decimal palindromic primes are: 2, 3, 5, 7, 11, 101, 131, 151, 181, 191, 313, 353, 373, 383, 727, 757, 787, 797, 919, 929, … (sequence A002385 in the OEIS) Except for 11, all palindromic primes have an odd number of digits, because the divisibility test for 11 tells us that every palindromic number with an even number of digits is a multiple of 11. It is not known if there are infinitely many palindromic primes in base 10. The largest known as of October 2021 is 10^1888529 − 10^944264 − 1, which has 1,888,529 digits, and was found on 18 October 2021 by Ryan Propper and Serge Batalov. On the other hand, it is known that, for any base, almost all palindromic numbers are composite, i.e. the ratio between palindromic composites and all palindromes less than n tends to 1. Document 4::: Partition number Furthermore p(n) = 0 when n is negative. The first few values of the partition function, starting with p(0) = 1, are: 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, … As of June 2022, the largest known prime number among the values of p(n) is p(1289844341), with 40,000 decimal digits. Until March 2022, this was also the largest prime that has been proved using elliptic curve primality proving. Document 5::: Supersingular prime (moonshine theory) In the mathematical branch of moonshine theory, a supersingular prime is a prime number that divides the order of the Monster group M, which is the largest sporadic simple group. There are precisely fifteen supersingular prime numbers: the first eleven primes (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, and 31), as well as 41, 47, 59, and 71. (sequence A002267 in the OEIS) The non-supersingular primes are 37, 43, 53, 61, 67, and any prime number greater than or equal to 73. Supersingular primes are related to the notion of supersingular elliptic curves as follows.
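Since $200000000006 = 2p$ with $p = 100000000003$ prime, Euler's totient gives $|\mathbf{Z}_{2p}^*| = \varphi(2)\varphi(p) = 1 \cdot (p-1) = 100000000002$, matching answer A. The computation in Python, with a small-analogue sanity check:

p = 100000000003          # given prime
n = 2 * p                 # 200000000006
# phi is multiplicative over coprime factors: phi(2p) = phi(2) * phi(p)
phi_n = (2 - 1) * (p - 1)
print(phi_n)              # 100000000002

# Sanity check of the same formula on a small analogue, phi(2*11) = 10:
from math import gcd
assert sum(1 for k in range(1, 22) if gcd(k, 22) == 1) == (2 - 1) * (11 - 1)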
epfl-collab
Select the \emph{incorrect} statement. Elliptic Curve Diffie-Hellman is
['based on the difficulty of computing the discrete logarithm in EC.', 'used for epassports.', 'based on the difficulty of factoring the polynomial of EC.', 'used in Bluetooth 2.1.']
C
null
Document 1::: Elliptic-curve Diffie-Hellman Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography. Document 2::: Diffie–Hellman problem The Diffie–Hellman problem (DHP) is a mathematical problem first proposed by Whitfield Diffie and Martin Hellman in the context of cryptography. The motivation for this problem is that many security systems use one-way functions: mathematical operations that are fast to compute, but hard to reverse. For example, they enable encrypting a message, but reversing the encryption is difficult. If solving the DHP were easy, these systems would be easily broken. Document 3::: Elliptic Curve Cryptography Elliptic-curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC allows smaller keys compared to non-EC cryptography (based on plain Galois fields) to provide equivalent security.Elliptic curves are applicable for key agreement, digital signatures, pseudo-random generators and other tasks. Indirectly, they can be used for encryption by combining the key agreement with a symmetric encryption scheme. They are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic-curve factorization. Document 4::: Decision Linear assumption The Decision Linear (DLIN) assumption is a computational hardness assumption used in elliptic curve cryptography. In particular, the DLIN assumption is useful in settings where the decisional Diffie–Hellman assumption does not hold (as is often the case in pairing-based cryptography). The Decision Linear assumption was introduced by Boneh, Boyen, and Shacham.Informally the DLIN assumption states that given ( u , v , h , u x , v y ) {\displaystyle (u,\,v,\,h,\,u^{x},\,v^{y})} , with u , v , h {\displaystyle u,\,v,\,h} random group elements and x , y {\displaystyle x,\,y} random exponents, it is hard to distinguish h x + y {\displaystyle h^{x+y}} from an independent random group element η {\displaystyle \eta } . Document 5::: Computational Diffie–Hellman assumption The computational Diffie–Hellman (CDH) assumption is a computational hardness assumption about the Diffie–Hellman problem. The CDH assumption involves the problem of computing the discrete logarithm in cyclic groups. The CDH problem illustrates the attack of an eavesdropper in the Diffie–Hellman key exchange protocol to obtain the exchanged secret key.
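The incorrect option is the one about "factoring the polynomial of EC"; ECDH rests on the hardness of the elliptic-curve discrete logarithm. A toy Python sketch of the key agreement over the textbook curve $y^2 = x^3 + 2x + 2$ over $\mathrm{GF}(17)$ (curve, base point, and secrets are illustrative only and far too small for real use):

# Toy curve y^2 = x^3 + a*x + b over GF(p); O (point at infinity) is None.
p, a, b = 17, 2, 2
G = (5, 1)                       # base point of order 19 on this curve

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None              # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

alice_priv, bob_priv = 7, 11     # illustrative secrets
alice_pub, bob_pub = ec_mul(alice_priv, G), ec_mul(bob_priv, G)
assert ec_mul(alice_priv, bob_pub) == ec_mul(bob_priv, alice_pub)
print("shared point:", ec_mul(alice_priv, bob_pub))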
epfl-collab
In which attack scenario does the adversary ask for the decryption of selected messages?
['Ciphertext only attack', 'Known plaintext attack', 'Chosen plaintext attack', 'Chosen ciphertext attack']
D
null
Document 1::: Padding oracle attack In cryptography, a padding oracle attack is an attack which uses the padding validation of a cryptographic message to decrypt the ciphertext. In cryptography, variable-length plaintext messages often have to be padded (expanded) to be compatible with the underlying cryptographic primitive. The attack relies on having a "padding oracle" who freely responds to queries about whether a message is correctly padded or not. Padding oracle attacks are mostly associated with CBC mode decryption used within block ciphers. Padding modes for asymmetric algorithms such as OAEP may also be vulnerable to padding oracle attacks. Document 2::: Bit-flipping attack A bit-flipping attack is an attack on a cryptographic cipher in which the attacker can change the ciphertext in such a way as to result in a predictable change of the plaintext, although the attacker is not able to learn the plaintext itself. Note that this type of attack is not—directly—against the cipher itself (as cryptanalysis of it would be), but against a particular message or series of messages. In the extreme, this could become a Denial of service attack against all messages on a particular channel using that cipher.The attack is especially dangerous when the attacker knows the format of the message. In such a situation, the attacker can turn it into a similar message but one in which some important information is altered. Document 3::: Adversary (cryptography) There are several types of adversaries depending on what capabilities or intentions they are presumed to have. Adversaries may be computationally bounded or unbounded (i.e. in terms of time and storage resources), eavesdropping or Byzantine (i.e. passively listening on or actively corrupting data in the channel), static or adaptive (i.e. having fixed or changing behavior), mobile or non-mobile (e.g. in the context of network security)and so on. In actual security practice, the attacks assigned to such adversaries are often seen, so such notional analysis is not merely theoretical. Document 4::: Adversary (cryptography) In cryptography, an adversary (rarely opponent, enemy) is a malicious entity whose aim is to prevent the users of the cryptosystem from achieving their goal (primarily privacy, integrity, and availability of data). An adversary's efforts might take the form of attempting to discover secret data, corrupting some of the data in the system, spoofing the identity of a message sender or receiver, or forcing system downtime. Actual adversaries, as opposed to idealized ones, are referred to as attackers. The former term predominates in the cryptographic and the latter in the computer security literature. Document 5::: Cryptographic algorithm An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users. Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging.
epfl-collab
An element of the finite field $\mathrm{GF}(2^8)$ is usually represented by\dots
['\\dots one hexadecimal digit.', '\\dots two hexadecimal digits.', '\\dots an irreducible polynomial of degree 8.', '\\dots eight bytes.']
B
null
Document 1::: Binary field GF(2) (also denoted F 2 {\displaystyle \mathbb {F} _{2}} , Z/2Z or Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } ) is the finite field of two elements (GF is the initialism of Galois field, another name for finite fields). Notations Z2 and Z 2 {\displaystyle \mathbb {Z} _{2}} may be encountered although they can be confused with the notation of 2-adic integers. GF(2) is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual. The elements of GF(2) may be identified with the two possible values of a bit and to the boolean values true and false. It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations. Document 2::: Primitive element (finite field) In field theory, a primitive element of a finite field GF(q) is a generator of the multiplicative group of the field. In other words, α ∈ GF(q) is called a primitive element if it is a primitive (q − 1)th root of unity in GF(q); this means that each non-zero element of GF(q) can be written as α^i for some integer i. If q is a prime number, the elements of GF(q) can be identified with the integers modulo q. In this case, a primitive element is also called a primitive root modulo q. For example, 2 is a primitive element of the field GF(3) and GF(5), but not of GF(7) since it generates the cyclic subgroup {2, 4, 1} of order 3; however, 3 is a primitive element of GF(7). The minimal polynomial of a primitive element is a primitive polynomial. Document 3::: Fermat (computer algebra system) To implement (most) finite fields, the user finds an irreducible monic polynomial in a symbolic variable, say p ( t 1 ) , {\displaystyle p(t_{1}),} and commands Fermat to mod out by it. This may be continued recursively, q ( t 2 , t 1 ) , {\displaystyle q(t_{2},t_{1}),} etc. Low level data structures are set up to facilitate arithmetic and gcd over this newly created ground field. Two special fields, G F ( 2 8 ) {\displaystyle GF(2^{8})} and G F ( 2 16 ) , {\displaystyle GF(2^{16}),} are more efficiently implemented at the bit level. Document 4::: Dual basis in a field extension Consider two bases for elements in a finite field, GF(p^m): B 1 = α 0 , α 1 , … , α m − 1 {\displaystyle B_{1}={\alpha _{0},\alpha _{1},\ldots ,\alpha _{m-1}}} and B 2 = γ 0 , γ 1 , … , γ m − 1 {\displaystyle B_{2}={\gamma _{0},\gamma _{1},\ldots ,\gamma _{m-1}}} then B2 can be considered a dual basis of B1 provided Tr ⁡ ( α i ⋅ γ j ) = { 0 , if ⁡ i ≠ j 1 , otherwise {\displaystyle \operatorname {Tr} (\alpha _{i}\cdot \gamma _{j})=\left\{{\begin{matrix}0,&\operatorname {if} \ i\neq j\\1,&\operatorname {otherwise} \end{matrix}}\right.} Here the trace of a value in GF(p^m) can be calculated as follows: Tr ⁡ ( β ) = ∑ i = 0 m − 1 β p i {\displaystyle \operatorname {Tr} (\beta )=\sum _{i=0}^{m-1}\beta ^{p^{i}}} Using a dual basis can provide a way to easily communicate between devices that use different bases, rather than having to explicitly convert between bases using the change of bases formula. Furthermore, if a dual basis is implemented then conversion from an element in the original basis to the dual basis can be accomplished with multiplication by the multiplicative identity (usually 1). Document 5::: Galois fields In mathematics, a finite field or Galois field (so-named in honor of Évariste Galois) is a field that contains a finite number of elements.
As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are given by the integers mod p when p is a prime number.
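An element of $\mathrm{GF}(2^8)$ is a polynomial of degree at most 7 over $\mathbb{Z}_2$, i.e. 8 coefficient bits, hence exactly one byte, conventionally written as two hexadecimal digits. A minimal Python sketch of the byte-to-polynomial reading (formatting choices are our own):

def byte_to_poly(byte: int) -> str:
    # Bit i of the byte is the coefficient of x^i (degree <= 7).
    terms = [("x^%d" % i if i else "1") for i in range(8) if (byte >> i) & 1]
    return " + ".join(reversed(terms)) or "0"

# 0x57 = 0101 0111 encodes x^6 + x^4 + x^2 + x + 1 (a stock AES example).
print(f"{0x57:#04x} -> {byte_to_poly(0x57)}")
assert byte_to_poly(0x57) == "x^6 + x^4 + x^2 + x^1 + 1"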
epfl-collab
Consider $GF(8)$ defined as $\mathbb{Z}_2[X]/(P(X))$ with $P(x) = X^3 + X + 1$. Compute $X^2 \times (X + 1)$ in $\mathbb{Z}_2[X]/(P(X))$
['$X^2+X+1$.', '$X+1$.', '$X^2 + 1$.', '$X^2$.']
A
null
Document 1::: Fermat (computer algebra system) To implement (most) finite fields, the user finds an irreducible monic polynomial in a symbolic variable, say p ( t 1 ) , {\displaystyle p(t_{1}),} and commands Fermat to mod out by it. This may be continued recursively, q ( t 2 , t 1 ) , {\displaystyle q(t_{2},t_{1}),} etc. Low level data structures are set up to facilitate arithmetic and gcd over this newly created ground field. Two special fields, G F ( 2 8 ) {\displaystyle GF(2^{8})} and G F ( 2 16 ) , {\displaystyle GF(2^{16}),} are more efficiently implemented at the bit level. Document 2::: Binary field GF(2) (also denoted F 2 {\displaystyle \mathbb {F} _{2}} , Z/2Z or Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } ) is the finite field of two elements (GF is the initialism of Galois field, another name for finite fields). Notations Z2 and Z 2 {\displaystyle \mathbb {Z} _{2}} may be encountered although they can be confused with the notation of 2-adic integers. GF(2) is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual. The elements of GF(2) may be identified with the two possible values of a bit and to the boolean values true and false. It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations. Document 3::: Number Field Sieve In number theory, the general number field sieve (GNFS) is the most efficient classical algorithm known for factoring integers larger than 10^100. Heuristically, its complexity for factoring an integer n (consisting of ⌊log_2 n⌋ + 1 bits) is of the form exp ⁡ ( ( ( 64 / 9 ) 1 / 3 + o ( 1 ) ) ( log ⁡ n ) 1 / 3 ( log ⁡ log ⁡ n ) 2 / 3 ) = L n [ 1 / 3 , ( 64 / 9 ) 1 / 3 ] {\displaystyle \exp \left(\left((64/9)^{1/3}+o(1)\right)\left(\log n\right)^{1/3}\left(\log \log n\right)^{2/3}\right)=L_{n}\left[1/3,(64/9)^{1/3}\right]} in O and L-notations. It is a generalization of the special number field sieve: while the latter can only factor numbers of a certain special form, the general number field sieve can factor any number apart from prime powers (which are trivial to factor by taking roots). Document 4::: Faugère's F4 and F5 algorithms In computer algebra, the Faugère F4 algorithm, by Jean-Charles Faugère, computes the Gröbner basis of an ideal of a multivariate polynomial ring. The algorithm uses the same mathematical principles as the Buchberger algorithm, but computes many normal forms in one go by forming a generally sparse matrix and using fast linear algebra to do the reductions in parallel. The Faugère F5 algorithm first calculates the Gröbner basis of a pair of generator polynomials of the ideal. Then it uses this basis to reduce the size of the initial matrices of generators for the next larger basis: If Gprev is an already computed Gröbner basis (f_2, …, f_m) and we want to compute a Gröbner basis of (f_1) + Gprev then we will construct matrices whose rows are m·f_1 such that m is a monomial not divisible by the leading term of an element of Gprev. Document 5::: Function field sieve Previous work includes the work of D. Coppersmith about the DLP in fields of characteristic two. The discrete logarithm problem in a finite field consists of solving the equation a x = b {\displaystyle a^{x}=b} for a , b ∈ F p n {\displaystyle a,b\in \mathbb {F} _{p^{n}}} , p {\displaystyle p} a prime number and n {\displaystyle n} an integer.
The function f: F p n → F p n , x ↦ a x {\displaystyle f:\mathbb {F} _{p^{n}}\to \mathbb {F} _{p^{n}},x\mapsto a^{x}} for a fixed a ∈ F p n {\displaystyle a\in \mathbb {F} _{p^{n}}} is a one-way function used in cryptography. Several cryptographic methods are based on the DLP such as the Diffie-Hellman key exchange, the El Gamal cryptosystem and the Digital Signature Algorithm.
epfl-collab
Let $n$ be a positive integer. An element $x \in \mathbb{Z}_n$ is \emph{always} invertible when \dots
['$x$ is even.', '$x$ and $n$ are coprime.', '$x$ and $\\varphi(n)$ are coprime.', '$n$ is prime.']
B
null
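A brute-force check (illustrative) that the invertible elements of $\mathbb{Z}_n$ are exactly those coprime to $n$:

from math import gcd

def units(n):
    # Elements that have a multiplicative inverse modulo n.
    return {x for x in range(n) if any((x * y) % n == 1 for y in range(n))}

for n in (6, 10, 12):
    assert units(n) == {x for x in range(n) if gcd(x, n) == 1}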
Document 1::: Invertible element For example, consider the functions from the integers to the integers. The doubling function x ↦ 2 x {\displaystyle x\mapsto 2x} has infinitely many left inverses under function composition, which are the functions that divide by two the even numbers, and give any value to odd numbers. Document 2::: Invertible element An element is invertible under an operation if it has a left inverse and a right inverse. In the common case where the operation is associative, the left and right inverse of an element are equal and unique. Indeed, if l and r are respectively a left inverse and a right inverse of x, then l = l ∗ ( x ∗ r ) = ( l ∗ x ) ∗ r = r . {\displaystyle l=l*(x*r)=(l*x)*r=r.} Document 3::: Invertible element If x ∗ y = e , {\displaystyle x*y=e,} where e is an identity element, one says that x is a left inverse of y, and y is a right inverse of x. Left and right inverses do not always exist, even when the operation is total and associative. For example, addition is a total associative operation on nonnegative integers, which has 0 as additive identity, and 0 is the only element that has an additive inverse. This lack of inverses is the main motivation for extending the natural numbers into the integers. An element can have several left inverses and several right inverses, even when the operation is total and associative. Document 4::: Invertible element The inverse of an invertible element is its unique left or right inverse. If the operation is denoted as an addition, the inverse, or additive inverse, of an element x is denoted − x . {\displaystyle -x.} Document 5::: Invertible element Otherwise, the inverse of x is generally denoted x − 1 , {\displaystyle x^{-1},} or, in the case of a commutative multiplication 1 x . {\textstyle {\frac {1}{x}}.} When there may be a confusion between several operations, the symbol of the operation may be added before the exponent, such as in x ∗ − 1 .
epfl-collab
Which of these attacks applies to the Diffie-Hellman key exchange when the channel cannot be authenticated?
['Birthday Paradox', 'Attack on low exponents', 'Man-in-the-middle attack', 'Meet-in-the-middle attack']
C
null
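With no authentication, Mallory can substitute her own public value in both directions and end up sharing a (different) key with each party. A toy demonstration (illustrative parameters, far too small and simplistic for real use):

import secrets

p, g = 2 ** 127 - 1, 3            # toy public parameters
a, b, m = (secrets.randbelow(p - 2) + 1 for _ in range(3))
A, B, M = pow(g, a, p), pow(g, b, p), pow(g, m, p)

# Mallory replaces A and B with M in transit.
k_alice = pow(M, a, p)            # what Alice believes is the shared key
k_bob   = pow(M, b, p)            # what Bob believes is the shared key
assert k_alice == pow(A, m, p)    # Mallory shares a key with Alice
assert k_bob   == pow(B, m, p)    # ... and a different one with Bob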
Document 1::: Elliptic-curve Diffie-Hellman Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography. Document 2::: Small subgroup confinement attack In cryptography, a subgroup confinement attack, or small subgroup confinement attack, on a cryptographic method that operates in a large finite group is where an attacker attempts to compromise the method by forcing a key to be confined to an unexpectedly small subgroup of the desired group. Several methods have been found to be vulnerable to subgroup confinement attack, including some forms or applications of Diffie–Hellman key exchange and DH-EKE. Document 3::: Computational Diffie–Hellman assumption The computational Diffie–Hellman (CDH) assumption is a computational hardness assumption about the Diffie–Hellman problem. The CDH assumption involves the problem of computing the discrete logarithm in cyclic groups. The CDH problem illustrates the attack of an eavesdropper in the Diffie–Hellman key exchange protocol to obtain the exchanged secret key. Document 4::: Diffie–Hellman problem The Diffie–Hellman problem (DHP) is a mathematical problem first proposed by Whitfield Diffie and Martin Hellman in the context of cryptography. The motivation for this problem is that many security systems use one-way functions: mathematical operations that are fast to compute, but hard to reverse. For example, they enable encrypting a message, but reversing the encryption is difficult. If solving the DHP were easy, these systems would be easily broken. Document 5::: Length extension attack In cryptography and computer security, a length extension attack is a type of attack where an attacker can use Hash(message1) and the length of message1 to calculate Hash(message1 ‖ message2) for an attacker-controlled message2, without needing to know the content of message1. This is problematic when the hash is used as a message authentication code with construction Hash(secret ‖ message), and message and the length of secret is known, because an attacker can include extra information at the end of the message and produce a valid hash without knowing the secret. Algorithms like MD5, SHA-1 and most of SHA-2 that are based on the Merkle–Damgård construction are susceptible to this kind of attack. Truncated versions of SHA-2, including SHA-384 and SHA-512/256 are not susceptible, nor is the SHA-3 algorithm.HMAC also uses a different construction and so is not vulnerable to length extension attacks.
epfl-collab
Which of the following is an acceptable commitment scheme, i.e., one that verifies the hiding and binding property (for a well chosen primitive and suitable $x$ and $r$):
['$Commit(x;r) = H(r\\|x)$, where $H$ is a hash function and $\\|$ denotes the concatenation.', '$Commit(x;r) = Enc_r(x)$, where $Enc_r$ is a symmetric encryption scheme with key $r$.', '$Commit(x;r) = H(x)$, where $H$ is a hash function.', '$Commit(x;r) = x \\oplus r$, where $\\oplus$ is the bitwise xor operation.']
A
null
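A sketch of option (a) with SHA-256 (illustrative; hiding requires $r$ to be long and uniformly random, binding rests on the collision resistance of $H$):

import hashlib, secrets

def commit(x: bytes):
    r = secrets.token_bytes(32)
    return hashlib.sha256(r + x).digest(), r   # (commitment, opening)

def verify(c, x: bytes, r: bytes):
    return hashlib.sha256(r + x).digest() == c

c, r = commit(b"my bid: 42")
assert verify(c, b"my bid: 42", r)
assert not verify(c, b"my bid: 43", r)         # binding: cannot reopen differently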
Document 1::: Sub-group hiding The sub-group hiding assumption is a computational hardness assumption used in elliptic curve cryptography and pairing-based cryptography. It was first introduced in to build a 2-DNF homomorphic encryption scheme. Document 2::: Zero-knowledge proofs A zero-knowledge proof of some statement must satisfy three properties: Completeness: if the statement is true, an honest verifier (that is, one following the protocol properly) will be convinced of this fact by an honest prover. Soundness: if the statement is false, no cheating prover can convince an honest verifier that it is true, except with some small probability. Zero-knowledge: if the statement is true, no verifier learns anything other than the fact that the statement is true. In other words, just knowing the statement (not the secret) is sufficient to imagine a scenario showing that the prover knows the secret. Document 3::: Decision Linear assumption The Decision Linear (DLIN) assumption is a computational hardness assumption used in elliptic curve cryptography. In particular, the DLIN assumption is useful in settings where the decisional Diffie–Hellman assumption does not hold (as is often the case in pairing-based cryptography). The Decision Linear assumption was introduced by Boneh, Boyen, and Shacham.Informally the DLIN assumption states that given ( u , v , h , u x , v y ) {\displaystyle (u,\,v,\,h,\,u^{x},\,v^{y})} , with u , v , h {\displaystyle u,\,v,\,h} random group elements and x , y {\displaystyle x,\,y} random exponents, it is hard to distinguish h x + y {\displaystyle h^{x+y}} from an independent random group element η {\displaystyle \eta } . Document 4::: Three-phase commit protocol In computer networking and databases, the three-phase commit protocol (3PC) is a distributed algorithm which lets all nodes in a distributed system agree to commit a transaction. It is a more failure-resilient refinement of the two-phase commit protocol (2PC). Document 5::: Zero-knowledge proof In cryptography, a zero-knowledge proof or zero-knowledge protocol is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is true, while avoiding conveying to the verifier any information beyond the mere fact of the statement's truth. The intuition underlying zero-knowledge proofs is that it is trivial to prove the possession of certain information by simply revealing it; the challenge is to prove this possession without revealing the information, or any aspect of it whatsoever.In light of the fact that one should be able to generate a proof of some statement only when in possession of certain secret information connected to the statement, the verifier, even after having become convinced of the statement's truth, should nonetheless remain unable to prove the statement to third parties. In the plain model, nontrivial zero-knowledge proofs (i.e., those for languages outside of BPP) demand interaction between the prover and the verifier. This interaction usually entails the selection of one or more random challenges by the verifier; the random origin of these challenges, together with the prover's successful responses to them notwithstanding, jointly convince the verifier that the prover does possess the claimed knowledge.
epfl-collab
A 128-bit key ...
['is too long for any practical application.', 'addresses the $n^2$ problem for $n=2^{64}$.', 'has 128 decimal digits.', 'provides reasonable security for at least four decades.']
D
null
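A back-of-the-envelope estimate supporting (d), under the generous assumption of 10^9 machines each testing 10^12 keys per second:

keys = 2 ** 128
rate = 10 ** 9 * 10 ** 12                  # keys tested per second (assumed)
years = keys / rate / (3600 * 24 * 365)
print(f"{years:.1e} years")                # ~1.1e10 years, far beyond four decades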
Document 1::: LILI-128 LILI-128 is an LFSR based synchronous stream cipher with a 128-bit key. On 13 November 2000, LILI-128 was presented at the NESSIE workshop. It is designed to be simple to implement in both software and hardware. In 2007, LILI-128 was totally broken by using a notebook running MATLAB in 1.61 hours. == References == Document 2::: Security level For example, AES-128 (key size 128 bits) is designed to offer a 128-bit security level, which is considered roughly equivalent to a RSA using 3072-bit key. In this context, security claim or target security level is the security level that a primitive was initially designed to achieve, although "security level" is also sometimes used in those contexts. When attacks are found that have lower cost than the security claim, the primitive is considered broken. Document 3::: 40-bit encryption 40-bit encryption refers to a (now broken) key size of forty bits, or five bytes, for symmetric encryption; this represents a relatively low level of security. A forty bit length corresponds to a total of 240 possible keys. Although this is a large number in human terms (about a trillion), it is possible to break this degree of encryption using a moderate amount of computing power in a brute-force attack, i.e., trying out each possible key in turn. Document 4::: 128-bit computing In computer architecture, 128-bit integers, memory addresses, or other data units are those that are 128 bits (16 octets) wide. Also, 128-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size. General home computing and gaming utility emerge at 8-bit (but not at 1-bit or 4-bit) word sizes, as 28=256 words become possible. Document 5::: Bit width For example, computer processors are often designed to process data grouped into words of a given length of bits (8 bit, 16 bit, 32 bit, 64 bit, etc.). The bit-length of each word defines, for one thing, how many memory locations can be independently addressed by the processor. In cryptography, the key size of an algorithm is the bit-length of the keys used by that algorithm, and it is an important factor of an algorithm's strength. == References ==
epfl-collab
Consider a hash function $H$ with $n$ output bits. Tick the \emph{incorrect} assertion.
['It is possible to find an output collision of $H$ with $O(2^{\\frac{n}{2}})$ memory and $O(1)$ running time.', 'Due to birthday paradox, an output collision of $H$ can be found much faster than with running time $2^n$.', 'It is possible to find an output collision of $H$ with $O(1)$ memory and $O(2^{\\frac{n}{2}})$ running time.', 'It is possible to find an output collision of $H$ with $O(2^{\\frac{n}{2}})$ memory and $O(2^{\\frac{n}{2}})$ running time.']
A
null
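The table-based birthday search below, sketched on SHA-256 truncated to n = 32 bits (illustrative), costs about $2^{n/2}$ in both time and memory; Floyd-style cycle finding brings the memory down to $O(1)$, but no method brings the running time down to $O(1)$, which is why (a) is the incorrect assertion:

import hashlib
from itertools import count

def h(i: int) -> bytes:
    return hashlib.sha256(str(i).encode()).digest()[:4]  # truncate to 32 bits

seen = {}
for i in count():                  # expect ~2^16 iterations before a collision
    d = h(i)
    if d in seen:
        print("collision:", seen[d], i)
        break
    seen[d] = i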
Document 1::: Minimal perfect hash function In computer science, a perfect hash function h for a set S is a hash function that maps distinct elements in S to a set of m integers, with no collisions. In mathematical terms, it is an injective function. Perfect hash functions may be used to implement a lookup table with constant worst-case access time. A perfect hash function can, as any hash function, be used to implement hash tables, with the advantage that no collision resolution has to be implemented. Document 2::: Hash chain A hash chain is a successive application of a cryptographic hash function h {\displaystyle h} to a string x {\displaystyle x} . For example, h ( h ( h ( h ( x ) ) ) ) {\displaystyle h(h(h(h(x))))} gives a hash chain of length 4, often denoted h 4 ( x ) {\displaystyle h^{4}(x)} Document 3::: SMASH (hash) Input: 256/512-bit message blocks m 1 , m 2 , . . . , m t {\displaystyle m_{1},m_{2},...,m_{t}} and θ ∈ G F ( 2 n ) {\displaystyle \theta \in GF(2^{n})} h 0 = f ( i v ) ⊕ i v {\displaystyle h_{0}=f(iv)\oplus iv} h i = h ( h i − 1 , m i ) = f ( h i 1 ⊕ m i ) ⊕ m i ⊕ θ m i {\displaystyle h_{i}=h(h_{i-1},m_{i})=f(h_{i_{1}}\oplus m_{i})\oplus m_{i}\oplus \theta m_{i}} h t + 1 = f ( h t ) ⊕ h t {\displaystyle h_{t+1}=f(h_{t})\oplus h_{t}} The function f is a complex compression function consisting of H-Rounds and L-Rounds using S-boxes, linear diffusion and variable rotations, details can be found here Document 4::: Pearson hashing However, using a too simple function, such as T = 255-i, partly defeats the usability as a hash function as anagrams will result in the same hash value; using a too complex function, on the other hand, will affect speed negatively. Using a function rather than a table also allows extending the block size. Such functions naturally have to be bijective, like their table variants. The algorithm can be described by the following pseudocode, which computes the hash of message C using the permutation table T: algorithm pearson hashing is h := 0 for each c in C loop h := T end loop return h The hash variable (h) may be initialized differently, e.g. to the length of the data (C) modulo 256. Document 5::: Secure Hash Algorithms The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS), including: SHA-0: A retronym applied to the original version of the 160-bit hash function published in 1993 under the name "SHA". It was withdrawn shortly after publication due to an undisclosed "significant flaw" and replaced by the slightly revised version SHA-1. SHA-1: A 160-bit hash function which resembles the earlier MD5 algorithm.
epfl-collab
Enigma
['has approximately $2^{256}$ possible keys', 'was a predecessor of a Turing machine model - a basis of Von Neumann architecture', 'achieves perfect security as was required due to military application', 'follows the Kerckhoffs principle']
D
null
Document 1::: Crypt (Unix) In Unix computing, crypt or enigma is a utility program used for encryption. Due to the ease of breaking it, it is considered to be obsolete. The program is usually used as a filter, and it has traditionally been implemented using a "rotor machine" algorithm based on the Enigma machine. It is considered to be cryptographically far too weak to provide any security against brute-force attacks by modern, commodity personal computers.Some versions of Unix shipped with an even weaker version of the crypt(1) command in order to comply with contemporaneous laws and regulations that limited the exportation of cryptographic software. Some of these were simply implementations of the Caesar cipher (effectively no more secure than ROT13, which is implemented as a Caesar cipher with a well-known key). Document 2::: Cipher In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography. Document 3::: Rotor cipher machine In cryptography, a rotor machine is an electro-mechanical stream cipher device used for encrypting and decrypting messages. Rotor machines were the cryptographic state-of-the-art for much of the 20th century; they were in widespread use in the 1920s–1970s. The most famous example is the German Enigma machine, the output of which was deciphered by the Allies during World War II, producing intelligence code-named Ultra. Document 4::: Cipher system Cipher System is a Swedish melodic death metal band from Tjörn. The band formed under the name Eternal Grief in 1995. They changed their name in 2001 to Cipher System. The band released their first album Central Tunnel Eight by Lifeforce Records on 2 November 2004. Document 5::: Square (cipher) In cryptography, Square (sometimes written SQUARE) is a block cipher invented by Joan Daemen and Vincent Rijmen. The design, published in 1997, is a forerunner to Rijndael, which has been adopted as the Advanced Encryption Standard. Square was introduced together with a new form of cryptanalysis discovered by Lars Knudsen, called the "Square attack". The structure of Square is a substitution–permutation network with eight rounds, operating on 128-bit blocks and using a 128-bit key. Square is not patented.
epfl-collab
Tick the \emph{incorrect} assertion. In RSA with public key $(e,N)$ and private key $(d,N)$ \ldots
['$e=3$ can be a valid choice of the public key-exponent.', 'we can recover $d$ if we can compute square root modulo $N$ efficiently.', 'we must have that $\\gcd(e,d) = 1$ to be able to decrypt unambiguously.', 'to decrypt a ciphertext $c$, we compute $c^d \\bmod{N}$.']
C
null
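A toy RSA instance (insecure illustrative parameters; `pow(e, -1, phi)` needs Python 3.8+) showing that decryption computes $c^d \bmod N$:

p, q, e = 61, 53, 17
N, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)             # private exponent: e*d = 1 mod phi(N)
m = 42
c = pow(m, e, N)                # encryption
assert pow(c, d, N) == m        # decryption is c^d mod N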
Document 1::: Decision Linear assumption The Decision Linear (DLIN) assumption is a computational hardness assumption used in elliptic curve cryptography. In particular, the DLIN assumption is useful in settings where the decisional Diffie–Hellman assumption does not hold (as is often the case in pairing-based cryptography). The Decision Linear assumption was introduced by Boneh, Boyen, and Shacham.Informally the DLIN assumption states that given ( u , v , h , u x , v y ) {\displaystyle (u,\,v,\,h,\,u^{x},\,v^{y})} , with u , v , h {\displaystyle u,\,v,\,h} random group elements and x , y {\displaystyle x,\,y} random exponents, it is hard to distinguish h x + y {\displaystyle h^{x+y}} from an independent random group element η {\displaystyle \eta } . Document 2::: Elliptic-curve Diffie-Hellman Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography. Document 3::: Computational Diffie–Hellman assumption The computational Diffie–Hellman (CDH) assumption is a computational hardness assumption about the Diffie–Hellman problem. The CDH assumption involves the problem of computing the discrete logarithm in cyclic groups. The CDH problem illustrates the attack of an eavesdropper in the Diffie–Hellman key exchange protocol to obtain the exchanged secret key. Document 4::: Computational Diffie–Hellman assumption Consider a cyclic group G of order q. The CDH assumption states that, given ( g , g a , g b ) {\displaystyle (g,g^{a},g^{b})\,} for a randomly chosen generator g and random a , b ∈ { 0 , … , q − 1 } , {\displaystyle a,b\in \{0,\ldots ,q-1\},\,} it is computationally intractable to compute the value g a b . {\displaystyle g^{ab}.\,} Document 5::: Blinding (cryptography) Depending on the characteristics of the blinding function, this can prevent some or all leakage of useful information. Note that security depends also on the resistance of the blinding functions themselves to side-channel attacks. For example, in RSA blinding involves computing the blinding operation E(x) = (xr)e mod N, where r is a random integer between 1 and N and relatively prime to N (i.e. gcd(r, N) = 1), x is the plaintext, e is the public RSA exponent and N is the RSA modulus. As usual, the decryption function f(z) = zd mod N is applied thus giving f(E(x)) = (xr)ed mod N = xr mod N. Finally it is unblinded using the function D(z) = zr−1 mod N. Multiplying xr mod N by r−1 mod N yields x, as desired. When decrypting in this manner, an adversary who is able to measure time taken by this operation would not be able to make use of this information (by applying timing attacks RSA is known to be vulnerable to) as she does not know the constant r and hence has no knowledge of the real input fed to the RSA primitives.
epfl-collab
Tick the \emph{false} assertion concerning WEP.
['In WEP, encryption is based on a block cipher.', 'WPA-TKIP was designed as a quick fix for WEP.', 'In WEP, encryption is based on RC4.', 'In WEP, IVs repeat themselves too often.']
A
null
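WEP encrypts with the RC4 stream cipher seeded with IV||key, so assertion (a) is the false one. For reference, a compact sketch of RC4 keystream generation (illustrative only; RC4 is broken and must not be used):

def rc4(key: bytes, n: int) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                        # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                          # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

keystream = rc4(b"iv||key", 16)                 # WEP XORs this with the plaintext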
Document 1::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol ⊥ {\displaystyle \bot } .Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), ⊥ {\displaystyle \bot } , is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 2::: Electromagnetic attack In cryptography, electromagnetic attacks are side-channel attacks performed by measuring the electromagnetic radiation emitted from a device and performing signal analysis on it. These attacks are a more specific type of what is sometimes referred to as Van Eck phreaking, with the intention to capture encryption keys. Electromagnetic attacks are typically non-invasive and passive, meaning that these attacks are able to be performed by observing the normal functioning of the target device without causing physical damage. However, an attacker may get a better signal with less noise by depackaging the chip and collecting the signal closer to the source. Document 3::: Boolean flag A Boolean flag, truth bit or truth flag in computer science is a Boolean value represented as one or more bits, which encodes a state variable with two possible values. Document 4::: Common Weakness Enumeration The Common Weakness Enumeration (CWE) is a category system for hardware and software weaknesses and vulnerabilities. It is sustained by a community project with the goals of understanding flaws in software and hardware and creating automated tools that can be used to identify, fix, and prevent those flaws. The project is sponsored by the National Cybersecurity FFRDC, which is operated by The MITRE Corporation, with support from US-CERT and the National Cyber Security Division of the U.S. Department of Homeland Security.Version 4.10 of the CWE standard was released in July 2021.CWE has over 600 categories, including classes for buffer overflows, path/directory tree traversal errors, race conditions, cross-site scripting, hard-coded passwords, and insecure random numbers. Document 5::: Simulated Electronic Launch Minuteman Simulated Electronic Launch Minuteman (SELM) is a method used by the United States Air Force to verify the reliability of the LGM-30 Minuteman intercontinental ballistic missile. SELM replaces key components at the Launch Control Center to allow a physical "keyturn" by missile combat crew members. This test allows end-to-end verification in the ICBM launch process.
epfl-collab
Let $n$ be an integer. Which of the following is \emph{not} a group in the general case?
['$(\\mathbf{Z}_n,+ \\pmod{n})$', '$(\\mathbf{Q}\\setminus \\{0\\},\\times)$', '$(\\mathbf{Z}_n,\\times \\pmod{n})$', '$(\\mathbf{R},+)$']
C
null
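$(\mathbf{Z}_n, \times \pmod{n})$ already fails for $n = 4$: the element $0$ never has an inverse, and neither does $2$, as this quick check shows:

n = 4
assert all((2 * y) % n != 1 for y in range(n))  # 2 is not invertible mod 4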
Document 1::: N-group (finite group theory) In mathematical finite group theory, an N-group is a group all of whose local subgroups (that is, the normalizers of nontrivial p-subgroups) are solvable groups. The non-solvable ones were classified by Thompson during his work on finding all the minimal finite simple groups. Document 2::: Multiplicative group of integers modulo n In modular arithmetic, the integers coprime (relatively prime) to n from the set { 0 , 1 , … , n − 1 } {\displaystyle \{0,1,\dots ,n-1\}} of n non-negative integers form a group under multiplication modulo n, called the multiplicative group of integers modulo n. Equivalently, the elements of this group can be thought of as the congruence classes, also known as residues modulo n, that are coprime to n. Hence another name is the group of primitive residue classes modulo n. In the theory of rings, a branch of abstract algebra, it is described as the group of units of the ring of integers modulo n. Here units refers to elements with a multiplicative inverse, which, in this ring, are exactly those coprime to n. This quotient group, usually denoted ( Z / n Z ) × {\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }} , is fundamental in number theory. It is used in cryptography, integer factorization, and primality testing. It is an abelian, finite group whose order is given by Euler's totient function: | ( Z / n Z ) × | = φ ( n ) . {\displaystyle |(\mathbb {Z} /n\mathbb {Z} )^{\times }|=\varphi (n).} For prime n the group is cyclic, and in general the structure is easy to describe, but no simple general formula for finding generators is known. Document 3::: Negative integer An integer may be regarded as a real number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5+1/2, and √2 are not.The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers. In fact, (rational) integers are algebraic integers that are also rational numbers. Document 4::: Complemented group In (Costantini & Zacher 2004) it is shown that every finite simple group is a complemented group. Note that in the classification of finite simple groups, K-group is more used to mean a group whose proper subgroups only have composition factors amongst the known finite simple groups. An example of a group that is not complemented (in either sense) is the cyclic group of order p2, where p is a prime number. This group only has one nontrivial subgroup H, the cyclic group of order p, so there can be no other subgroup L to be the complement of H. Document 5::: Nonabelian group In mathematics, and specifically in group theory, a non-abelian group, sometimes called a non-commutative group, is a group (G, ∗) in which there exists at least one pair of elements a and b of G, such that a ∗ b ≠ b ∗ a. This class of groups contrasts with the abelian groups. (In an abelian group, all pairs of group elements commute). Non-abelian groups are pervasive in mathematics and physics. One of the simplest examples of a non-abelian group is the dihedral group of order 6.
epfl-collab
Tick the \textbf{true} statement.
['For all $n \\geq 2$ and all $x \\in \\mathbb{Z}_n$, $x$ is invertible if and only if $x$ divides $n$.', 'For all $n \\geq 2$, $\\mathbb{Z}_n^*$ has order $n-1$.', 'If $x \\in \\mathbb{Z}_n^*$ has order $m$, then $x^i \\equiv x^{i \\pmod{m}} \\pmod{n}$ for all $i\\in \\mathbb{Z}$.', 'For all $x \\in \\mathbb{Z}_n$, we have $x^{\\varphi(n)}\\equiv 1 \\pmod{n}$.']
C
null
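A quick numerical check of (c) (illustrative), taking $x = 2$ in $\mathbb{Z}_{11}^*$, where $2$ has order $m = 10$:

n, x = 11, 2
m = next(k for k in range(1, n) if pow(x, k, n) == 1)  # order of x
for i in range(1, 100):
    assert pow(x, i, n) == pow(x, i % m, n)            # exponents reduce mod m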
Document 1::: Vector clocks Each time a process experiences an internal event, it increments its own logical clock in the vector by one. For instance, upon an event at process i {\displaystyle i} , it updates V C i ← V C i + 1 {\displaystyle VC_{i}\leftarrow VC_{i}+1} . Each time a process sends a message, it increments its own logical clock in the vector by one (as in the bullet above, but not twice for the same event) then it pairs the message with a copy of its own vector and finally sends the pair. Each time a process receives a message-vector clock pair, it increments its own logical clock in the vector by one and updates each element in its vector by taking the maximum of the value in its own vector clock and the value in the vector in the received pair (for every element). For example, if process P i {\displaystyle P_{i}} receives a message ( m , V C j ) {\displaystyle (m,VC_{j})} from P j {\displaystyle P_{j}} , it first increments its own logical clock in the vector by one V C i ← V C i + 1 {\displaystyle VC_{i}\leftarrow VC_{i}+1} and then updates its entire vector by setting V C i ← max ( V C i , V C j ) , ∀ k {\displaystyle VC_{i}\leftarrow \max(VC_{i},VC_{j}),\forall k} . Document 2::: Quantified Boolean formula problem In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic (also known as Second-order propositional logic) where every variable is quantified (or bound), using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT (Quantified SAT). Document 3::: Boolean flag A Boolean flag, truth bit or truth flag in computer science is a Boolean value represented as one or more bits, which encodes a state variable with two possible values. Document 4::: HBG (time signal) At the beginning of each second (with the exception of the 59th), the carrier signal was interrupted for a period of 0.1 s or 0.2 s, which corresponded to a binary "0" or "1". The transmission of the minute, hour, calendar date, day of the week, month and current year was achieved by means of a BCD code identical to that of DCF77. Like DCF77, the carrier was not interrupted during the last second of each minute. Document 5::: Block floating point Block floating point (BFP) is a method used to provide an arithmetic approaching floating point while using a fixed-point processor. BFP assigns a group of significands (the non-exponent part of the floating-point number) to a single exponent, rather than single significand being assigned its own exponent. BFP can be advantageous to limit space use in hardware to perform the same functions as floating-point algorithms, by reusing the exponent; some operations over multiple values between blocks can also be done with a reduced amount of computation.The common exponent is found by data with the largest amplitude in the block. To find the value of the exponent, the number of leading zeros must be found (count leading zeros). For this to be done, the number of left shifts needed for the data must be normalized to the dynamic range of the processor used. 
Some processors have means to find this out themselves, such as exponent detection and normalization instructions.Block floating-point algorithms were extensively studied by James Hardy Wilkinson.BFP can be recreated in software for smaller performance gains.
epfl-collab
What is $\varphi(48)$?
['$16$', '$30$', '$24$', '$47$']
A
null
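Since $48 = 2^4 \cdot 3$, we get $\varphi(48) = 48\,(1 - \tfrac{1}{2})(1 - \tfrac{1}{3}) = 16$. A one-line brute-force confirmation:

from math import gcd
print(sum(1 for k in range(1, 48) if gcd(k, 48) == 1))  # 16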
Document 1::: Abell 48 Abell 48 is a planetary nebula likely located around 14,000 light years away in the constellation of Aquila. It is noteworthy among planetary nebulae for hosting a rare WN-type Wolf-Rayet-type central star, a -type star, which was once thought to be a bona-fide Wolf-Rayet star, and received the name WR 120–6. The nebula is made up of two rings surrounding the central star, and is heavily reddened, with an E(B-V) value of 2.14 and a visual extinction of 6.634 magnitudes, which is why it appears so dim. Document 2::: 48-bit computing In computer architecture, 48-bit integers can represent 281,474,976,710,656 (248 or 2.814749767×1014) discrete values. This allows an unsigned binary integer range of 0 through 281,474,976,710,655 (248 − 1) or a signed two's complement range of -140,737,488,355,328 (-247) through 140,737,488,355,327 (247 − 1). A 48-bit memory address can directly address every byte of 256 terabytes of storage. 48-bit can refer to any other data unit that consumes 48 bits (6 octets) in width. Examples include 48-bit CPU and ALU architectures that are based on registers, address buses, or data buses of that size. Document 3::: WASP-48 WASP-48 is a subgiant star about 1400 light-years away. The star is likely older than Sun and slightly depleted in heavy elements. It shows an infrared excess noise of unknown origin, yet has no detectable ultraviolet emissions associated with the starspot activity. The discrepancy may be due to large interstellar absorption of light in interstellar medium for WASP-48. The measurements are compounded by the emission from eclipsing contact binary NSVS-3071474 projected on sky plane nearby, although no true stellar companions were detected by survey in 2015.The star is rotating rapidly, being spun up by the tides raised by the giant planet on close orbit. Document 4::: Avrami equation Akad. Nauk. SSSR., 1937, 3, 355). Document 5::: WASP-48b WASP-48b is an extrasolar planet orbiting the star WASP-48 in the constellation Cygnus. The planet was detected using the transit method by the SuperWASP team, which published its discovery in 2011. It orbits its host star in just 2.14 days with a semi-major axis of 0.034 AU and has an equilibrium temperature of 1956±54 K. The dayside temperature was measured to be around 2300 K in 2018.The planetary atmosphere transmission spectrum is gray and featureless, having no noticeable Rayleigh scattering.
epfl-collab
Tick the true assertion.
['A dictionary attack requires less memory than a time-memory tradeoff.', 'AES is the ancestor of DES.', 'Double-DES succumbs to a Meet-in-the-Middle attack.', 'IDEA has the same round functions as DES.']
C
null
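Option (c) is the classical result: given one known plaintext/ciphertext pair, tabulating the forward half lets both keys be recovered in roughly $2^{k+1}$ cipher calls instead of $2^{2k}$. A minimal sketch with a toy 8-bit cipher standing in for DES (all names and parameters illustrative):

from collections import defaultdict

def E(k, x):                                   # toy 8-bit cipher, a stand-in for DES
    return ((x ^ k) * 167 + k) % 256

def D(k, y):                                   # its inverse (167 is odd, so invertible mod 256)
    return (((y - k) * pow(167, -1, 256)) % 256) ^ k

k1, k2, pt = 0x3A, 0xC5, 0x7E
ct = E(k2, E(k1, pt))                          # double encryption

mid = defaultdict(list)                        # forward half: 2^8 encryptions
for k in range(256):
    mid[E(k, pt)].append(k)
cands = [(ka, kb) for kb in range(256) for ka in mid[D(kb, ct)]]
assert (k1, k2) in cands                       # ~2^9 work instead of 2^16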
Document 1::: Assertion (software development) In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception. Document 2::: Test assertion In computer software testing, a test assertion is an expression which encapsulates some testable logic specified about a target under test. The expression is formally presented as an assertion, along with some form of identifier, to help testers and engineers ensure that tests of the target relate properly and clearly to the corresponding specified statements about the target. Usually the logic for each test assertion is limited to one single aspect specified. A test assertion may include prerequisites which must be true for the test assertion to be valid. Document 3::: False (logic) In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol ⊥ {\displaystyle \bot } .Another approach is used for several formal theories (e.g., intuitionistic propositional calculus), where a propositional constant (i.e. a nullary connective), ⊥ {\displaystyle \bot } , is introduced, the truth value of which being always false in the sense above. It can be treated as an absurd proposition, and is often called absurdity. Document 4::: Mathematical truth Truth is the property of being in accord with fact or reality. In everyday language, truth is typically ascribed to things that aim to represent reality or otherwise correspond to it, such as beliefs, propositions, and declarative sentences.Truth is usually held to be the opposite of falsehood. The concept of truth is discussed and debated in various contexts, including philosophy, art, theology, law, and science. Most human activities depend upon the concept, where its nature as a concept is assumed rather than being a subject of discussion, including journalism and everyday life. Document 5::: Judgment (mathematical logic) In mathematical logic, a judgment (or judgement) or assertion is a statement or enunciation in a metalanguage. For example, typical judgments in first-order logic would be that a string is a well-formed formula, or that a proposition is true. Similarly, a judgment may assert the occurrence of a free variable in an expression of the object language, or the provability of a proposition. In general, a judgment may be any inductively definable assertion in the metatheory.
epfl-collab
Tick the \emph{correct} assertion.
['The Keccak hash function is based on the Merkle-Damg{\\aa}rd construction.', 'MD5 is using a compression function based on the Davies-Meyer scheme.', 'Plain CBCMAC is resistant to forgery attacks.', 'GCM is an efficient MAC based on the CBC mode.']
B
null
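Assertion (b): MD5's compression function follows the Davies-Meyer pattern $h(H, m) = E_m(H) \oplus H$, where the message block keys a block cipher. A sketch with a toy 32-bit cipher standing in for MD5's internal cipher (illustrative only):

def E(key: int, block: int) -> int:             # toy 32-bit cipher, a stand-in
    x = block
    for r in range(4):
        x = ((x ^ key) * 0x9E3779B1 + r) % 2 ** 32
        x = ((x << 13) | (x >> 19)) % 2 ** 32   # 32-bit rotation
    return x

def davies_meyer(H: int, m: int) -> int:
    return E(m, H) ^ H                          # the message block keys the cipher

H = 0x67452301                                  # some IV
for m in (0xDEADBEEF, 0xCAFEBABE):              # iterate over message blocks
    H = davies_meyer(H, m)
print(hex(H))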
Document 1::: Assertion (software development) In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception. Document 2::: Test assertion In computer software testing, a test assertion is an expression which encapsulates some testable logic specified about a target under test. The expression is formally presented as an assertion, along with some form of identifier, to help testers and engineers ensure that tests of the target relate properly and clearly to the corresponding specified statements about the target. Usually the logic for each test assertion is limited to one single aspect specified. A test assertion may include prerequisites which must be true for the test assertion to be valid. Document 3::: Validated numerics Validated numerics, or rigorous computation, verified computation, reliable computation, numerical verification (German: Zuverlässiges Rechnen) is numerics including mathematically strict error (rounding error, truncation error, discretization error) evaluation, and it is one field of numerical analysis. For computation, interval arithmetic is used, and all results are represented by intervals. Validated numerics were used by Warwick Tucker in order to solve the 14th of Smale's problems, and today it is recognized as a powerful tool for the study of dynamical systems. Document 4::: Correctness (computer science) For example, successively searching through integers 1, 2, 3, … to see if we can find an example of some phenomenon—say an odd perfect number—it is quite easy to write a partially correct program (see box). But to say this program is totally correct would be to assert something currently not known in number theory. A proof would have to be a mathematical proof, assuming both the algorithm and specification are given formally. Document 5::: Compiler correctness In computing, compiler correctness is the branch of computer science that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler.
epfl-collab
The Time-Memory Tradeoff Attack ...
['can be combined with birthday paradox to find the order of the group in RSA efficiently.', 'is useful for finding a preimage within complexity $O\\big(\\big({\\frac{2}{3}}\\big)^N\\big).$', 'is a dedicated method which works only on SHA1.', 'is useful for finding a preimage within complexity $O(N^{\\frac{2}{3}}).$']
D
null
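A much-simplified, single-table Hellman sketch (illustrative; the full attack balances many tables to reach time and memory around $N^{2/3}$, and for brevity the target below is a point known to be covered by the table):

import hashlib, random

N = 2 ** 18                                   # toy search space
def f(x: int) -> int:                         # function whose preimages we seek
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:3], "big") % N

m = t = round(N ** (1 / 3))                   # chain count and chain length
random.seed(1)
table, demo_point = {}, None
for _ in range(m):                            # offline: ~m*t steps, store only (end, start)
    x0 = x = random.randrange(N)
    for step in range(t):
        if step == t // 2:
            demo_point = x                    # an interior point, kept for the demo
        x = f(x)
    table[x] = x0

y = f(demo_point)                             # target; covered by construction
z = y
for i in range(t):                            # online: walk y forward to a chain end
    if z in table:
        x = table[z]                          # rebuild the chain from its start
        for _ in range(t - 1 - i):
            x = f(x)
        if f(x) == y:
            print("preimage of y:", x)
            break
    z = f(z)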
Document 1::: Time/memory/data tradeoff attack A time/memory/data tradeoff attack is a type of cryptographic attack where an attacker tries to achieve a situation similar to the space–time tradeoff but with the additional parameter of data, representing the amount of data available to the attacker. An attacker balances or reduces one or two of those parameters in favor of the other one or two. This type of attack is very difficult, so most of the ciphers and encryption schemes in use were not designed to resist it. Document 2::: Timing attacks In cryptography, a timing attack is a side-channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithms. Every logical operation in a computer takes time to execute, and the time can differ based on the input; with precise measurements of the time for each operation, an attacker can work backwards to the input. Finding secrets through timing information may be significantly easier than using cryptanalysis of known plaintext, ciphertext pairs. Sometimes timing information is combined with cryptanalysis to increase the rate of information leakage.Information can leak from a system through measurement of the time it takes to respond to certain queries. Document 3::: Time–memory tradeoff A space–time trade-off, also known as time–memory trade-off or the algorithmic space-time continuum in computer science is a case where an algorithm or program trades increased space usage with decreased time. Here, space refers to the data storage consumed in performing a given task (RAM, HDD, etc), and time refers to the time consumed in performing a given task (computation time or response time). The utility of a given space–time tradeoff is affected by related fixed and variable costs (of, e.g., CPU speed, storage space), and is subject to diminishing returns. Document 4::: Timing attacks How much this information can help an attacker depends on many variables: cryptographic system design, the CPU running the system, the algorithms used, assorted implementation details, timing attack countermeasures, the accuracy of the timing measurements, etc. Timing attacks can be applied to any algorithm that has data-dependent timing variation. Removing timing-dependencies is difficult in some algorithms that use low-level operations that frequently exhibit varied execution time. Timing attacks are often overlooked in the design phase because they are so dependent on the implementation and can be introduced unintentionally with compiler optimizations. Avoidance of timing attacks involves design of constant-time functions and careful testing of the final executable code. Document 5::: Rainbow tables A common defense against this attack is to compute the hashes using a key derivation function that adds a "salt" to each password before hashing it, with different passwords receiving different salts, which are stored in plain text along with the hash. Rainbow tables are a practical example of a space–time tradeoff: they use less computer processing time and more storage than a brute-force attack which calculates a hash on every attempt, but more processing time and less storage than a simple table that stores the hash of every possible password. Rainbow tables were invented by Philippe Oechslin as an application of an earlier, simpler algorithm by Martin Hellman.
epfl-collab
Let $f: \mathbb{Z}_{m n} \rightarrow \mathbb{Z}_m \times \mathbb{Z}_n$ be defined by $f (x) = (x \bmod m,x \bmod n)$. Then $f$ is a ring isomorphism between $\mathbb{Z}_{180}$ and:
['$\\mathbb{Z}_{4} \\times \\mathbb{Z}_{45}$.', '$\\mathbb{Z}_{6} \\times \\mathbb{Z}_{30}$.', '$\\mathbb{Z}_{2} \\times \\mathbb{Z}_{90}$.', '$\\mathbb{Z}_{10} \\times \\mathbb{Z}_{18}$.']
A
null
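Since $\gcd(4, 45) = 1$ and $4 \cdot 45 = 180$, the CRT gives $\mathbb{Z}_{180} \cong \mathbb{Z}_4 \times \mathbb{Z}_{45}$; the other pairs fail because their moduli share a factor. An illustrative check:

from math import gcd
m, n = 4, 45
assert gcd(m, n) == 1 and m * n == 180         # coprime, so CRT applies
f = lambda x: (x % m, x % n)
assert len({f(x) for x in range(180)}) == 180  # f is a bijection
for a in range(0, 180, 7):                     # a sample of pairs suffices here
    for b in range(0, 180, 11):
        assert f((a + b) % 180) == ((f(a)[0] + f(b)[0]) % m, (f(a)[1] + f(b)[1]) % n)
        assert f((a * b) % 180) == ((f(a)[0] * f(b)[0]) % m, (f(a)[1] * f(b)[1]) % n)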
Document 1::: First ring isomorphism theorem In mathematics, specifically abstract algebra, the isomorphism theorems (also known as Noether's isomorphism theorems) are theorems that describe the relationship between quotients, homomorphisms, and subobjects. Versions of the theorems exist for groups, rings, vector spaces, modules, Lie algebras, and various other algebraic structures. In universal algebra, the isomorphism theorems can be generalized to the context of algebras and congruences. Document 2::: Rng homomorphism In ring theory, a branch of abstract algebra, a ring homomorphism is a structure-preserving function between two rings. More explicitly, if R and S are rings, then a ring homomorphism is a function f: R → S such that f is: addition preserving: f ( a + b ) = f ( a ) + f ( b ) {\displaystyle f(a+b)=f(a)+f(b)} for all a and b in R,multiplication preserving: f ( a b ) = f ( a ) f ( b ) {\displaystyle f(ab)=f(a)f(b)} for all a and b in R,and unit (multiplicative identity) preserving: f ( 1 R ) = 1 S {\displaystyle f(1_{R})=1_{S}} .Additive inverses and the additive identity are part of the structure too, but it is not necessary to require explicitly that they too are respected, because these conditions are consequences of the three conditions above. If in addition f is a bijection, then its inverse f−1 is also a ring homomorphism. In this case, f is called a ring isomorphism, and the rings R and S are called isomorphic. Document 3::: Surjective homomorphism A ring homomorphism is a map between rings that preserves the ring addition, the ring multiplication, and the multiplicative identity. Whether the multiplicative identity is to be preserved depends upon the definition of ring in use. If the multiplicative identity is not preserved, one has a rng homomorphism. Document 4::: Nondegenerate quadratic form Let R be a commutative ring, M be an R-module, and b: M × M → R be an R-bilinear form. A mapping q: M → R: v ↦ b(v, v) is the associated quadratic form of b, and B: M × M → R: (u, v) ↦ q(u + v) − q(u) − q(v) is the polar form of q. A quadratic form q: M → R may be characterized in the following equivalent ways: There exists an R-bilinear form b: M × M → R such that q(v) is the associated quadratic form. q(av) = a2q(v) for all a ∈ R and v ∈ M, and the polar form of q is R-bilinear. Document 5::: Signed-digit representation Every digit set D {\displaystyle {\mathcal {D}}} has a dual digit set D op {\displaystyle {\mathcal {D}}^{\operatorname {op} }} given by the inverse order of the digits with an isomorphism g: D → D op {\displaystyle g:{\mathcal {D}}\rightarrow {\mathcal {D}}^{\operatorname {op} }} defined by − f D = g ∘ f D op {\displaystyle -f_{\mathcal {D}}=g\circ f_{{\mathcal {D}}^{\operatorname {op} }}} . 
As a result, for any signed-digit representations N {\displaystyle {\mathcal {N}}} of a number system ring N {\displaystyle N} constructed from D {\displaystyle {\mathcal {D}}} with valuation v D: N → N {\displaystyle v_{\mathcal {D}}:{\mathcal {N}}\rightarrow N} , there exists a dual signed-digit representations of N {\displaystyle N} , N op {\displaystyle {\mathcal {N}}^{\operatorname {op} }} , constructed from D op {\displaystyle {\mathcal {D}}^{\operatorname {op} }} with valuation v D op: N op → N {\displaystyle v_{{\mathcal {D}}^{\operatorname {op} }}:{\mathcal {N}}^{\operatorname {op} }\rightarrow N} , and an isomorphism h: N → N op {\displaystyle h:{\mathcal {N}}\rightarrow {\mathcal {N}}^{\operatorname {op} }} defined by − v D = h ∘ v D op {\displaystyle -v_{\mathcal {D}}=h\circ v_{{\mathcal {D}}^{\operatorname {op} }}} , where − {\displaystyle -} is the additive inverse operator of N {\displaystyle N} . The digit set for balanced form representations is self-dual.
epfl-collab
A Carmichael number $n$ ...
['is a prime number.', 'will be considered as a prime by the Miller-Rabin algorithm.', "will always pass Fermat's test for any $0 < b < n$.", 'verifies that $\\forall b$, $\\mathsf{gcd}(b,n)=1$ implies that $b^{n-1} \\equiv 1 \\ \\pmod n $.']
D
null
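The smallest Carmichael number is $561 = 3 \cdot 11 \cdot 17$; a direct check of the defining property (d):

from math import gcd
n = 561                                             # composite: 3 * 11 * 17
assert all(pow(b, n - 1, n) == 1 for b in range(2, n) if gcd(b, n) == 1)
assert any(n % d == 0 for d in range(2, n))         # yet n is not prime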
Document 1::: Carmichael number In number theory, a Carmichael number is a composite number n {\displaystyle n} , which in modular arithmetic satisfies the congruence relation: b n ≡ b ( mod n ) {\displaystyle b^{n}\equiv b{\pmod {n}}} for all integers b {\displaystyle b} . The relation may also be expressed in the form: b n − 1 ≡ 1 ( mod n ) {\displaystyle b^{n-1}\equiv 1{\pmod {n}}} .for all integers b {\displaystyle b} which are relatively prime to n {\displaystyle n} . Carmichael numbers are named after American mathematician Robert Carmichael, the term having been introduced by Nicolaas Beeger in 1950 (Øystein Ore had referred to them in 1948 as numbers with the "Fermat property", or "F numbers" for short). Document 2::: Lucas–Carmichael number In mathematics, a Lucas–Carmichael number is a positive composite integer n such that if p is a prime factor of n, then p + 1 is a factor of n + 1; n is odd and square-free.The first condition resembles the Korselt's criterion for Carmichael numbers, where -1 is replaced with +1. The second condition eliminates from consideration some trivial cases like cubes of prime numbers, such as 8 or 27, which otherwise would be Lucas–Carmichael numbers (since n3 + 1 = (n + 1)(n2 − n + 1) is always divisible by n + 1). They are named after Édouard Lucas and Robert Carmichael. Document 3::: Reduced totient In number theory, a branch of mathematics, the Carmichael function λ(n) of a positive integer n is the smallest positive integer m such that a m ≡ 1 ( mod n ) {\displaystyle a^{m}\equiv 1{\pmod {n}}} holds for every integer a coprime to n. In algebraic terms, λ(n) is the exponent of the multiplicative group of integers modulo n. The Carmichael function is named after the American mathematician Robert Carmichael who defined it in 1910. It is also known as Carmichael's λ function, the reduced totient function, and the least universal exponent function. The following table compares the first 36 values of λ(n) (sequence A002322 in the OEIS) with Euler's totient function φ (in bold if they are different; the ns such that they are different are listed in OEIS: A033949). Document 4::: Carmichael number They are infinite in number. They constitute the comparatively rare instances where the strict converse of Fermat's Little Theorem does not hold. This fact precludes the use of that theorem as an absolute test of primality.The Carmichael numbers form the subset K1 of the Knödel numbers. Document 5::: Knödel number In number theory, an n-Knödel number for a given positive integer n is a composite number m with the property that each i < m coprime to m satisfies i m − n ≡ 1 ( mod m ) {\displaystyle i^{m-n}\equiv 1{\pmod {m}}} . The concept is named after Walter Knödel.The set of all n-Knödel numbers is denoted Kn. The special case K1 is the Carmichael numbers. There are infinitely many n-Knödel numbers for a given n. Due to Euler's theorem every composite number m is an n-Knödel number for n = m − φ ( m ) {\displaystyle n=m-\varphi (m)} where φ {\displaystyle \varphi } is Euler's totient function.
epfl-collab
Which symmetric key primitive is used in WPA2 encryption?
['MD5 OFB Mode', 'RC4 CBC Mode', 'AES CCM Mode', 'KASUMI ECB Mode']
C
null
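WPA2's CCMP runs AES in CCM mode (counter-mode encryption combined with a CBC-MAC). A sketch using the third-party `cryptography` package (an assumed dependency, not part of the standard library):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
aead = AESCCM(key)
nonce = os.urandom(13)                        # CCMP uses a 13-byte nonce
ct = aead.encrypt(nonce, b"wifi frame payload", b"frame header (AAD)")
assert aead.decrypt(nonce, ct, b"frame header (AAD)") == b"wifi frame payload"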
Document 1::: KRACK The weakness is exhibited in the Wi-Fi standard itself, and not due to errors in the implementation of a sound standard by individual products or implementations. Therefore, any correct implementation of WPA2 is likely to be vulnerable. The vulnerability affects all major software platforms, including Microsoft Windows, macOS, iOS, Android, Linux, OpenBSD and others.The widely used open-source implementation wpa_supplicant, utilized by Linux and Android, was especially susceptible as it can be manipulated to install an all-zeros encryption key, effectively nullifying WPA2 protection in a man-in-the-middle attack. Version 2.7 fixed this vulnerability.The security protocol protecting many Wi-Fi devices can essentially be bypassed, potentially allowing an attacker to intercept sent and received data. Document 2::: Cellular Message Encryption Algorithm It is byte-oriented, with variable block size, typically 2 to 6 bytes. The key size is only 64 bits. Both of these are unusually small for a modern cipher. Document 3::: Cryptographic primitive Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems. These routines include, but are not limited to, one-way hash functions and encryption functions. Document 4::: Two-square cipher The Two-square cipher, also called double Playfair, is a manual symmetric encryption technique. It was developed to ease the cumbersome nature of the large encryption/decryption matrix used in the four-square cipher while still being slightly stronger than the single-square Playfair cipher. The technique encrypts pairs of letters (digraphs), and thus falls into a category of ciphers known as polygraphic substitution ciphers. This adds significant strength to the encryption when compared with monographic substitution ciphers, which operate on single characters. The use of digraphs makes the two-square technique less susceptible to frequency analysis attacks, as the analysis must be done on 676 possible digraphs rather than just 26 for monographic substitution. The frequency analysis of digraphs is possible, but considerably more difficult, and it generally requires a much larger ciphertext in order to be useful. Document 5::: KCipher-2 KCipher-2 is a stream cipher jointly developed by Kyushu University and Japanese telecommunications company KDDI. It is standardized as ISO/IEC 18033–4, and is on the list of recommended ciphers published by the Japanese Cryptography Research and Evaluation Committees (CRYPTREC). It has a key length of 128 bits, and can encrypt and decrypt around seven to ten times faster than the Advanced Encryption Standard (AES) algorithm.
epfl-collab
Let $n$ be an integer. What is the cardinality of $\mathbf{Z}^*_n$?
['$n$', '$\\varphi(n-1)$', '$n-1$', '$\\varphi(n)$']
D
null
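By definition $\mathbf{Z}^*_n$ collects the invertible residues, and there are $\varphi(n)$ of them; a brute-force cross-check (illustrative):

from math import gcd
phi = lambda n: sum(1 for k in range(1, n) if gcd(k, n) == 1)
units = lambda n: sum(1 for x in range(n) if any(x * y % n == 1 for y in range(n)))
assert all(units(n) == phi(n) for n in range(2, 50))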
Document 1::: Euler's formula For complex z Here, n is restricted to positive integers, so there is no question about what the power with exponent n means. Document 2::: Finite index More generally, | Z: n Z | = n {\displaystyle |\mathbb {Z} :n\mathbb {Z} |=n} for any positive integer n. When G is finite, the formula may be written as | G: H | = | G | / | H | {\displaystyle |G:H|=|G|/|H|} , and it implies Lagrange's theorem that | H | {\displaystyle |H|} divides | G | {\displaystyle |G|} . When G is infinite, | G: H | {\displaystyle |G:H|} is a nonzero cardinal number that may be finite or infinite. For example, | Z: 2 Z | = 2 {\displaystyle |\mathbb {Z} :2\mathbb {Z} |=2} , but | R: Z | {\displaystyle |\mathbb {R} :\mathbb {Z} |} is infinite. If N is a normal subgroup of G, then | G: N | {\displaystyle |G:N|} is equal to the order of the quotient group G / N {\displaystyle G/N} , since the underlying set of G / N {\displaystyle G/N} is the set of cosets of N in G. Document 3::: Zero-sum problem In number theory, zero-sum problems are certain kinds of combinatorial problems about the structure of a finite abelian group. Concretely, given a finite abelian group G and a positive integer n, one asks for the smallest value of k such that every sequence of elements of G of size k contains n terms that sum to 0. The classic result in this area is the 1961 theorem of Paul Erdős, Abraham Ginzburg, and Abraham Ziv. They proved that for the group Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } of integers modulo n, Explicitly this says that any multiset of 2n − 1 integers has a subset of size n the sum of whose elements is a multiple of n, but that the same is not true of multisets of size 2n − 2. Document 4::: Multiplicative group of integers modulo n In modular arithmetic, the integers coprime (relatively prime) to n from the set { 0 , 1 , … , n − 1 } {\displaystyle \{0,1,\dots ,n-1\}} of n non-negative integers form a group under multiplication modulo n, called the multiplicative group of integers modulo n. Equivalently, the elements of this group can be thought of as the congruence classes, also known as residues modulo n, that are coprime to n. Hence another name is the group of primitive residue classes modulo n. In the theory of rings, a branch of abstract algebra, it is described as the group of units of the ring of integers modulo n. Here units refers to elements with a multiplicative inverse, which, in this ring, are exactly those coprime to n. This quotient group, usually denoted ( Z / n Z ) × {\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }} , is fundamental in number theory. It is used in cryptography, integer factorization, and primality testing. It is an abelian, finite group whose order is given by Euler's totient function: | ( Z / n Z ) × | = φ ( n ) . {\displaystyle |(\mathbb {Z} /n\mathbb {Z} )^{\times }|=\varphi (n).} For prime n the group is cyclic, and in general the structure is easy to describe, but no simple general formula for finding generators is known. Document 5::: Dedekind number In mathematics, the Dedekind numbers are a rapidly growing sequence of integers named after Richard Dedekind, who defined them in 1897. The Dedekind number M(n) is the number of monotone boolean functions of n variables. Equivalently, it is the number of antichains of subsets of an n-element set, the number of elements in a free distributive lattice with n generators, and one more than the number of abstract simplicial complexes on a set with n elements. 
Accurate asymptotic estimates of M(n) and an exact expression as a summation are known. However Dedekind's problem of computing the values of M(n) remains difficult: no closed-form expression for M(n) is known, and exact values of M(n) have been found only for n ≤ 9 (sequence A000372 in the OEIS).
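The answer turns on the fact spelled out in Document 4 above: the units of Z/nZ are exactly the residues coprime to n, so the group has order φ(n). A minimal Python check of that count; the helper name `phi` is ours, not from any cited source:

```python
from math import gcd

def phi(n: int) -> int:
    # Euler's totient: count of 1 <= k <= n with gcd(k, n) == 1.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in range(2, 20):
    units = [x for x in range(1, n) if gcd(x, n) == 1]  # elements of (Z/nZ)^x
    assert len(units) == phi(n)
print("order of (Z/nZ)^x equals phi(n) for n = 2..19")
```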
epfl-collab
Let $n$ be any positive integer. Three of the following assertions are equivalent. Tick the remaining one.
['$\\mathbb{Z}_n$ is a field.', '$n$ is a prime power.', 'Any element $x \\in \\mathbb{Z}_n \\backslash \\{0\\}$ is invertible.', '$\\varphi(n)=n-1 $, where $\\varphi$ denotes the Euler totient function.']
B
null
Document 1::: Idoneal number A positive integer n is idoneal if and only if it cannot be written as ab + bc + ac for distinct positive integers a, b, and c. It is sufficient to consider the set { n + k^2 | 3·k^2 ≤ n ∧ gcd(n, k) = 1 }; if all these numbers are of the form p, p^2, 2·p or 2^s for some integer s, where p is a prime, then n is idoneal. Document 2::: N conjecture In number theory the n conjecture is a conjecture stated by Browkin & Brzeziński (1994) as a generalization of the abc conjecture to more than three integers. Document 3::: Davenport–Erdős theorem In number theory, the Davenport–Erdős theorem states that, for sets of multiples of integers, several different notions of density are equivalent. Let A = a 1 , a 2 , … {\displaystyle A=a_{1},a_{2},\dots } be a sequence of positive integers. Then the multiples of A {\displaystyle A} are another set M ( A ) {\displaystyle M(A)} that can be defined as the set M ( A ) = { k a ∣ k ∈ N , a ∈ A } {\displaystyle M(A)=\{ka\mid k\in \mathbb {N} ,a\in A\}} of numbers formed by multiplying members of A {\displaystyle A} by arbitrary positive integers. According to the Davenport–Erdős theorem, for a set M ( A ) {\displaystyle M(A)} , the following notions of density are equivalent, in the sense that they all produce the same number as each other for the density of M ( A ) {\displaystyle M(A)}: The lower natural density, the inferior limit as n {\displaystyle n} goes to infinity of the proportion of members of M ( A ) {\displaystyle M(A)} in the interval [ 1 , n ] {\displaystyle [1,n]} . The logarithmic density or multiplicative density, the weighted proportion of members of M ( A ) {\displaystyle M(A)} in the interval [ 1 , n ] {\displaystyle [1,n]} , again in the limit, where the weight of an element a {\displaystyle a} is 1 / a {\displaystyle 1/a} . Document 4::: On Numbers and Games In the Zeroth Part, Conway provides axioms for arithmetic: addition, subtraction, multiplication, division and inequality. This allows an axiomatic construction of numbers and ordinal arithmetic, namely, the integers, reals, the countable infinity, and entire towers of infinite ordinals. The object to which these axioms apply takes the form {L|R}, which can be interpreted as a specialized kind of set; a kind of two-sided set. By insisting that L Document 5::: Factorial function In mathematics, the factorial of a non-negative integer n {\displaystyle n} , denoted by n ! {\displaystyle n!} , is the product of all positive integers less than or equal to n {\displaystyle n} . The factorial of n {\displaystyle n} also equals the product of n {\displaystyle n} with the next smaller factorial: n! = n · (n − 1)!. For example, 5! = 5 · 4! = 120. The value of 0! is 1.
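Of the four assertions, "Z_n is a field", "every nonzero element is invertible" and "φ(n) = n − 1" each hold exactly when n is prime, while "n is a prime power" is strictly weaker: Z_4 already fails, since 2 has no inverse mod 4. A brute-force verification sketch, assuming trial division is acceptable at this scale:

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for n in range(2, 500):
    # "Z_n is a field" <=> every x in Z_n \ {0} is invertible <=> gcd(x, n) = 1 for all x.
    all_invertible = all(gcd(x, n) == 1 for x in range(1, n))
    assert all_invertible == (phi(n) == n - 1) == is_prime(n)

# Prime powers that are not prime break the equivalence:
for n in (4, 8, 9, 25, 27):
    assert not is_prime(n) and phi(n) != n - 1
```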
epfl-collab
Birthday attacks \dots
['are equivalent to exhaustive search.', 'imply that a majority of people is born in Spring.', 'are used to break Google Calendars.', 'can be used to find collisions in hash functions.']
D
null
Document 1::: Dot plot (statistics) A dot chart or dot plot is a statistical chart consisting of data points plotted on a fairly simple scale, typically using filled in circles. There are two common, yet very different, versions of the dot chart. The first has been used in hand-drawn (pre-computer era) graphs to depict distributions going back to 1884. The other version is described by William S. Cleveland as an alternative to the bar chart, in which dots are used to depict the quantitative values (e.g. counts) associated with categorical variables. Document 2::: Nine dots problem The nine dots puzzle is a mathematical puzzle whose task is to connect nine squarely arranged points with a pen by four (or fewer) straight lines without lifting the pen. The puzzle has appeared under various other names over the years. Document 3::: Dot-decimal notation Dot-decimal notation is a presentation format for numerical data. It consists of a string of decimal numbers, using the full stop (dot) as a separation character. A common use of dot-decimal notation is in information technology where it is a method of writing numbers in octet-grouped base-10 (decimal) numbers. In computer networking, Internet Protocol Version 4 (IPv4) addresses are commonly written using the quad-dotted notation of four decimal integers, ranging from 0 to 255 each. Document 4::: PANDAS Pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS) is a controversial hypothetical diagnosis for a subset of children with rapid onset of obsessive-compulsive disorder (OCD) or tic disorders. Symptoms are proposed to be caused by group A streptococcal (GAS), and more specifically, group A beta-hemolytic streptococcal (GABHS) infections. OCD and tic disorders are hypothesized to arise in a subset of children as a result of a post-streptococcal autoimmune process. The proposed link between infection and these disorders is that an autoimmune reaction to infection produces antibodies that interfere with basal ganglia function, causing symptom exacerbations, and this autoimmune response results in a broad range of neuropsychiatric symptoms. The PANDAS hypothesis, first described in 1998, was based on observations in clinical case studies by Susan Swedo et al. at the US National Institute of Mental Health and in subsequent clinical trials where children appeared to have dramatic and sudden OCD exacerbations and tic disorders following infections. Document 5::: Talk:Fibonacci sequence With lots. Of. Periods.
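The relevant option is the collision-finding one: by the birthday paradox, a hash with an n-bit output yields a collision after roughly 2^(n/2) evaluations, far fewer than the 2^n of exhaustive search. A toy sketch on SHA-256 truncated to 32 bits (the truncation is ours, purely to make the collision reachable in seconds):

```python
import hashlib
from itertools import count

def h(x: bytes, bits: int = 32) -> int:
    # Toy hash: SHA-256 truncated to `bits` bits, so collisions are findable.
    return int.from_bytes(hashlib.sha256(x).digest(), "big") >> (256 - bits)

seen = {}
for i in count():
    m = str(i).encode()
    d = h(m)
    if d in seen and seen[d] != m:
        print(f"collision after {i + 1} trials: {seen[d]!r} and {m!r}")
        break
    seen[d] = m
```

For a 32-bit truncation a collision is expected after about 2^16 ≈ 65,000 trials, matching the square-root behaviour.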
epfl-collab
What is the number of secret bits in a WEP key?
['64 or 128 bytes.', '40 or 104 bytes.', '40 or 104 bits.', '64 or 128 bits.']
C
null
Document 1::: 40-bit encryption 40-bit encryption refers to a (now broken) key size of forty bits, or five bytes, for symmetric encryption; this represents a relatively low level of security. A forty bit length corresponds to a total of 2^40 possible keys. Although this is a large number in human terms (about a trillion), it is possible to break this degree of encryption using a moderate amount of computing power in a brute-force attack, i.e., trying out each possible key in turn. Document 2::: Bit width For example, computer processors are often designed to process data grouped into words of a given length of bits (8 bit, 16 bit, 32 bit, 64 bit, etc.). The bit-length of each word defines, for one thing, how many memory locations can be independently addressed by the processor. In cryptography, the key size of an algorithm is the bit-length of the keys used by that algorithm, and it is an important factor of an algorithm's strength. Document 3::: Cellular Message Encryption Algorithm It is byte-oriented, with variable block size, typically 2 to 6 bytes. The key size is only 64 bits. Both of these are unusually small for a modern cipher. Document 4::: M6 (cipher) Mod 257, information about the secret key itself is revealed. One known plaintext reduces the complexity of a brute force attack to about 2^35 trial encryptions; "a few dozen" known plaintexts lowers this number to about 2^31. Due to its simple key schedule, M6 is also vulnerable to a slide attack, which requires more known plaintext but less computation. Document 5::: Completeness (cryptography) In cryptography, a boolean function is said to be complete if the value of each output bit depends on all input bits. This is a desirable property to have in an encryption cipher, so that if one bit of the input (plaintext) is changed, every bit of the output (ciphertext) has an average of 50% probability of changing. The easiest way to show why this is good is the following: consider that if we changed our 8-byte plaintext's last byte, it would only have any effect on the 8th byte of the ciphertext. This would mean that if the attacker guessed 256 different plaintext-ciphertext pairs, he would always know the last byte of every 8-byte sequence we send (effectively 12.5% of all our data). Finding out 256 plaintext-ciphertext pairs is not hard at all in the internet world, given that standard protocols are used, and standard protocols have standard headers and commands (e.g. "get", "put", "mail from:", etc.) which the attacker can safely guess. On the other hand, if our cipher has this property (and is generally secure in other ways, too), the attacker would need to collect 2^64 (~10^20) plaintext-ciphertext pairs to crack the cipher in this way.
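The 64- and 128-bit figures often quoted for WEP include the 24-bit initialization vector, which is transmitted in the clear; only 40 or 104 bits are actually secret. The arithmetic, as a sketch:

```python
IV_BITS = 24  # WEP prepends a 24-bit public IV to the secret key
for secret_bits in (40, 104):
    rc4_key_bits = secret_bits + IV_BITS
    print(f"{secret_bits}-bit secret + {IV_BITS}-bit IV = {rc4_key_bits}-bit RC4 key")
# -> the 64-bit and 128-bit totals that marketing calls "64/128-bit WEP"
```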
epfl-collab
Tick the \emph{incorrect} assertion. In a multiplicative cyclic group $G$ of order $m > 1$ with neutral element $e_G$ \ldots
['the order of every element $x \\in G$ is $m$.', 'there exists $g \\in G$ that generates the whole group.', '$\\lambda = m$, where $\\lambda$ is the exponent of $G$.', 'for any $x \\in G$, we have that $x^m = e_{G}$.']
A
null
Document 1::: Finite cyclic group For any element g in any group G, one can form the subgroup that consists of all its integer powers: ⟨g⟩ = { g^k | k ∈ Z }, called the cyclic subgroup generated by g. The order of g is |⟨g⟩|, the number of elements in ⟨g⟩, conventionally abbreviated as |g|, as ord(g), or as o(g). That is, the order of an element is equal to the order of the cyclic subgroup that it generates. A cyclic group is a group which is equal to one of its cyclic subgroups: G = ⟨g⟩ for some element g, called a generator of G. For a finite cyclic group G of order n we have G = {e, g, g^2, ..., g^{n−1}}, where e is the identity element and g^i = g^j whenever i ≡ j (mod n); in particular g^n = g^0 = e, and g^{−1} = g^{n−1}. An abstract group defined by this multiplication is often denoted C_n, and we say that G is isomorphic to the standard cyclic group C_n. Such a group is also isomorphic to Z/nZ, the group of integers modulo n with the addition operation, which is the standard cyclic group in additive notation. Document 2::: Elementary group theory In a multiplicative group, the operation symbol is usually omitted entirely, so that the operation is denoted by juxtaposition, a b {\displaystyle ab} instead of a ⋅ b {\displaystyle a\cdot b} . The definition of a group does not require that a ⋅ b = b ⋅ a {\displaystyle a\cdot b=b\cdot a} for all elements a {\displaystyle a} and b {\displaystyle b} in G {\displaystyle G} . If this additional condition holds, then the operation is said to be commutative, and the group is called an abelian group. Document 3::: Elementary group theory The multiplicative group of the field R {\displaystyle \mathbb {R} } is the group R × {\displaystyle \mathbb {R} ^{\times }} whose underlying set is the set of nonzero real numbers R ∖ { 0 } {\displaystyle \mathbb {R} \smallsetminus \{0\}} and whose operation is multiplication. More generally, one speaks of an additive group whenever the group operation is notated as addition; in this case, the identity is typically denoted 0 {\displaystyle 0} , and the inverse of an element x {\displaystyle x} is denoted − x {\displaystyle -x} . Similarly, one speaks of a multiplicative group whenever the group operation is notated as multiplication; in this case, the identity is typically denoted 1 {\displaystyle 1} , and the inverse of an element x {\displaystyle x} is denoted x − 1 {\displaystyle x^{-1}} . Document 4::: Computational Diffie–Hellman assumption Consider a cyclic group G of order q. The CDH assumption states that, given ( g , g a , g b ) {\displaystyle (g,g^{a},g^{b})\,} for a randomly chosen generator g and random a , b ∈ { 0 , … , q − 1 } , {\displaystyle a,b\in \{0,\ldots ,q-1\},\,} it is computationally intractable to compute the value g a b . {\displaystyle g^{ab}.\,} Document 5::: Primary cyclic group In mathematics, a primary cyclic group is a group that is both a cyclic group and a p-primary group for some prime number p. That is, it is a cyclic group of order p^m, C_{p^m}, for some prime number p, and natural number m. Every finite abelian group G may be written as a finite direct sum of primary cyclic groups, as stated in the fundamental theorem of finite abelian groups: G = ⨁ 1 ≤ i ≤ n C p i m i . {\displaystyle G=\bigoplus _{1\leq i\leq n}\mathrm {C} _{{p_{i}}^{m_{i}}}.} This expression is essentially unique: there is a bijection between the sets of groups in two such expressions, which maps each group to one that is isomorphic.
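The false assertion is the first one: in a cyclic group of order m every element's order divides m, x^m = e_G holds for all x, and some generator of order m exists, but elements of smaller order are the norm. A quick numerical check in (Z/13Z)^×, a cyclic group of order 12 (the helper name is ours):

```python
def element_order(x, n):
    # Multiplicative order of x in (Z/nZ)^*; assumes gcd(x, n) == 1.
    k, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        k += 1
    return k

p = 13
m = p - 1                                  # order of the cyclic group (Z/13Z)^*
orders = {x: element_order(x, p) for x in range(1, p)}
assert all(m % k == 0 for k in orders.values())      # every order divides m
assert all(pow(x, m, p) == 1 for x in range(1, p))   # x^m = e for all x
assert any(k == m for k in orders.values())          # a generator exists
assert not all(k == m for k in orders.values())      # but not every element generates
```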
epfl-collab
Which one of the following notions means that ``the information must be protected against any malicious modification''?
['confidentiality.', 'integrity.', 'reliability.', 'privacy.']
B
null
Document 1::: Clark–Wilson model The Clark–Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computing system. The model is primarily concerned with formalizing the notion of information integrity. Information integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent. An integrity policy describes how the data items in the system should be kept valid from one state of the system to the next and specifies the capabilities of various principals in the system. The model uses security labels to grant access to objects via transformation procedures and a restricted interface model. Document 2::: Threat detection The potential violation of security. A set of properties of a specific external entity (which may be either an individual or class of entities) that, in union with a set of properties of a specific internal entity, implies a risk (according to a body of knowledge). Document 3::: Information hazard An information hazard, or infohazard, is "a risk that arises from the dissemination of (true) information that may cause harm or enable some agent to cause harm", as defined by philosopher Nick Bostrom in 2011, or contained in information sensitivity. One example would be instructions for creating a thermonuclear weapon. Document 4::: Threat detection ); malicious actors; errors; failures. Factor analysis of information risk defines threat as: threats are anything (e.g., object, substance, human, etc.) that are capable of acting against an asset in a manner that can result in harm. A tornado is a threat, as is a flood, as is a hacker. The key consideration is that threats apply the force (water, wind, exploit code, etc.) against an asset that can cause a loss event to occur. National Information Assurance Training and Education Center gives a more articulated definition of threat: The means through which the ability or intent of a threat agent to adversely affect an automated system, facility, or operation can be manifest.
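Integrity, the property the answer names, means that malicious modification must be detectable. One standard mechanism is a message authentication code; a minimal sketch with Python's standard hmac module (key and messages are placeholders for illustration):

```python
import hashlib
import hmac

key = b"shared-secret"                       # placeholder key for the sketch
msg = b"transfer 100 CHF to account 42"
tag = hmac.new(key, msg, hashlib.sha256).digest()

# The receiver recomputes the tag; any modification of the message changes it.
tampered = b"transfer 900 CHF to account 66"
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
assert not hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest())
```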
epfl-collab
Confidentiality means that:
['the message can be read by anyone.', 'the information must be protected against any malicious modification.', 'the message should make clear who the author is.', 'information should not leak to any unexpected party.']
D
null
Document 1::: Electronic health record confidentiality Electronic health record medical healthcare systems are developing widely. Record-keeping is moving from manual processes to automation, and patient health records are increasingly kept electronically. One important aspect of any health record system is ensuring the confidentiality of patient information because of its importance in the medical field. Document 2::: Electronic health record confidentiality In recent times, individual patients' and populations' health information has been recorded in the form of electronically accessible files known as electronic health records (EHR). These are digital records which can be easily transferred across the internet. A multitude of information is contained within the electronic health record, including billing information, patient's weight, age, vital signs, radiology images, laboratory test results, immunization status, allergies, medication, medical history and demographics, etc. Document 3::: Genetic privacy Genetic privacy involves the concept of personal privacy concerning the storing, repurposing, provision to third parties, and displaying of information pertaining to one's genetic information. This concept also encompasses privacy regarding the ability to identify specific individuals by their genetic sequence, and the potential to gain information on specific characteristics about that person via portions of their genetic information, such as their propensity for specific diseases or their immediate or distant ancestry. With the public release of genome sequence information of participants in large-scale research studies, questions regarding participant privacy have been raised. In some cases, it has been shown that it is possible to identify previously anonymous participants from large-scale genetic studies that released gene sequence information. Genetic privacy concerns also arise in the context of criminal law because the government can sometimes overcome criminal suspects' genetic privacy interests and obtain their DNA sample. Due to the shared nature of genetic information between family members, this raises privacy concerns of relatives as well. As concerns and issues of genetic privacy are raised, regulations and policies have been developed in the United States at both the federal and state level. Document 4::: Data re-identification This assurance of privacy allows the government to legally share limited data sets with third parties without requiring written permission. Such data has proved to be very valuable for researchers, particularly in health care. The risk of re-identification is significantly reduced with GDPR-compliant pseudonymization which requires that data cannot be attributed to a specific data subject without the use of separately kept "additional information". Document 5::: Data masking Data masking or data obfuscation is the process of modifying sensitive data in such a way that it is of no or little value to unauthorized intruders while still being usable by software or authorized personnel. Data masking can also be referred to as anonymization, or tokenization, depending on context. The main reason to mask data is to protect information that is classified as personally identifiable information, or mission critical data.
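Confidentiality is about preventing leakage to unintended parties, and it is typically enforced with encryption. As an illustration only, a one-time pad: impractical key management, but information-theoretically confidential, so the ciphertext alone leaks nothing about the message.

```python
import os

msg = b"patient record: diagnosis X"
pad = os.urandom(len(msg))                      # one-time pad, as long as the message
ct = bytes(m ^ p for m, p in zip(msg, pad))     # without the pad, ct reveals nothing
pt = bytes(c ^ p for c, p in zip(ct, pad))      # the intended holder of the pad decrypts
assert pt == msg
```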
epfl-collab
Which of the following acronyms does not designate a mode of operation?
['ECB', 'CRC', 'CTR', 'CBC']
B
null
Document 1::: Security Modes Generally, security modes refer to information systems security modes of operations used in mandatory access control (MAC) systems. Often, these systems contain information at various levels of security classification. The mode of operation is determined by: The type of users who will be directly or indirectly accessing the system. The type of data, including classification levels, compartments, and categories, that are processed on the system. The type of levels of users, their need to know, and formal access approvals that the users will have. Document 2::: Operating model The term is most commonly used today when referring to the way a single business division or single function operates, as in 'the operating model of the exploration division' or 'the operating model of the HR function'. It can also be used at a much more micro level to describe how a department within a function works or how a factory is laid out. Document 3::: Cipher-block chaining In cryptography, a block cipher mode of operation is an algorithm that uses a block cipher to provide information security such as confidentiality or authenticity. A block cipher by itself is only suitable for the secure cryptographic transformation (encryption or decryption) of one fixed-length group of bits called a block. A mode of operation describes how to repeatedly apply a cipher's single-block operation to securely transform amounts of data larger than a block. Most modes require a unique binary sequence, often called an initialization vector (IV), for each encryption operation. The IV has to be non-repeating and, for some modes, random as well. Document 4::: EAX mode EAX mode (encrypt-then-authenticate-then-translate) is a mode of operation for cryptographic block ciphers. It is an Authenticated Encryption with Associated Data (AEAD) algorithm designed to simultaneously provide both authentication and privacy of the message (authenticated encryption) with a two-pass scheme, one pass for achieving privacy and one for authenticity for each block. EAX mode was submitted on October 3, 2003 to the attention of NIST in order to replace CCM as standard AEAD mode of operation, since CCM mode lacks some desirable attributes of EAX and is more complex.
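ECB, CBC and CTR are block-cipher modes of operation; CRC is a cyclic redundancy check, an error-detecting code with no key and no confidentiality. The sketch below contrasts the two, using a hash-based keystream in place of a real block cipher (our simplification, not real AES-CTR):

```python
import hashlib
import zlib

# CRC is a keyless checksum for error detection, not a cipher mode.
print(hex(zlib.crc32(b"hello world")))

# CTR, by contrast, turns a block primitive into a stream cipher: encrypt a
# counter under the key and XOR the keystream with the data. Here SHA-256
# stands in for the keyed block function, purely for illustration.
def ctr_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    out = bytearray()
    for off in range(0, len(data), 32):
        block = hashlib.sha256(key + nonce + off.to_bytes(8, "big")).digest()
        out.extend(c ^ k for c, k in zip(data[off:off + 32], block))
    return bytes(out)

key, nonce = b"k" * 16, b"n" * 8
ct = ctr_encrypt(key, nonce, b"attack at dawn")
assert ctr_encrypt(key, nonce, ct) == b"attack at dawn"  # XOR stream: same call decrypts
```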
epfl-collab
Select the \emph{incorrect} statement. The brute force attack \dots
['can break a cipher with a $128$-bit key on your PC today.', 'has higher worst case complexity than average case complexity.', 'refers to a way of getting the secret key, exhaustively.', "can be applicable after decades according to Moore's law."]
A
null
Document 1::: Brute force attack In cryptography, a brute-force attack consists of an attacker submitting many passwords or passphrases with the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess the key which is typically created from the password using a key derivation function. This is known as an exhaustive key search. Document 2::: Brute force attack A brute-force attack is a cryptanalytic attack that can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in an information-theoretically secure manner). Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier. When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Document 3::: Padding oracle attack In cryptography, a padding oracle attack is an attack which uses the padding validation of a cryptographic message to decrypt the ciphertext. In cryptography, variable-length plaintext messages often have to be padded (expanded) to be compatible with the underlying cryptographic primitive. The attack relies on having a "padding oracle" who freely responds to queries about whether a message is correctly padded or not. Padding oracle attacks are mostly associated with CBC mode decryption used within block ciphers. Padding modes for asymmetric algorithms such as OAEP may also be vulnerable to padding oracle attacks. Document 4::: Brute force attack Longer passwords, passphrases and keys have more possible values, making them exponentially more difficult to crack than shorter ones. Brute-force attacks can be made less effective by obfuscating the data to be encoded, making it more difficult for an attacker to recognize when the code has been cracked, or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it. Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one. The word 'hammering' is sometimes used to describe a brute-force attack, with 'anti-hammering' for countermeasures. Document 5::: Inference attack An Inference Attack is a data mining technique performed by analyzing data in order to illegitimately gain knowledge about a subject or database. A subject's sensitive information can be considered as leaked if an adversary can infer its real value with a high confidence. This is an example of breached information security. An Inference attack occurs when a user is able to infer from trivial information more robust information about a database without directly accessing it.
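The incorrect statement is the first one: 2^128 keys are far beyond any PC, today or for the foreseeable future, while the average case of exhaustive search is indeed half the worst case. A back-of-the-envelope estimate (the throughput figure is an assumption, not a benchmark):

```python
# Exhaustive key search: worst case tries every key, average case half of them.
keys_per_second = 1e9                 # optimistic single-PC guess (assumption)
for bits in (40, 56, 128):
    worst = 2 ** bits / keys_per_second
    print(f"{bits}-bit key: worst {worst:.2e} s, average {worst / 2:.2e} s")
# 128 bits -> roughly 1e22 years even at a billion keys per second.
```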
epfl-collab
WEP \dots
['is badly broken.', 'provides good confidentiality.', 'provides good authentication.', 'provides good message integrity.']
A
null
Document 1::: Dot-decimal notation Dot-decimal notation is a presentation format for numerical data. It consists of a string of decimal numbers, using the full stop (dot) as a separation character. A common use of dot-decimal notation is in information technology where it is a method of writing numbers in octet-grouped base-10 (decimal) numbers. In computer networking, Internet Protocol Version 4 (IPv4) addresses are commonly written using the quad-dotted notation of four decimal integers, ranging from 0 to 255 each. Document 2::: Machine Identification Code A Machine Identification Code (MIC), also known as printer steganography, yellow dots, tracking dots or secret dots, is a digital watermark which certain color laser printers and copiers leave on every printed page, allowing identification of the device which was used to print a document and giving clues to the originator. Developed by Xerox and Canon in the mid-1980s, its existence became public only in 2004. In 2018, scientists developed privacy software to anonymize prints in order to support whistleblowers publishing their work. Document 3::: Dot plot (statistics) A dot chart or dot plot is a statistical chart consisting of data points plotted on a fairly simple scale, typically using filled in circles. There are two common, yet very different, versions of the dot chart. The first has been used in hand-drawn (pre-computer era) graphs to depict distributions going back to 1884. The other version is described by William S. Cleveland as an alternative to the bar chart, in which dots are used to depict the quantitative values (e.g. counts) associated with categorical variables. Document 4::: Lewis Structure Lewis structures – also called Lewis dot formulas, Lewis dot structures, electron dot structures, or Lewis electron dot structures (LEDs) – are diagrams that show the bonding between atoms of a molecule, as well as the lone pairs of electrons that may exist in the molecule. A Lewis structure can be drawn for any covalently bonded molecule, as well as coordination compounds. The Lewis structure was named after Gilbert N. Lewis, who introduced it in his 1916 article The Atom and the Molecule. Document 5::: Nine dots problem The nine dots puzzle is a mathematical puzzle whose task is to connect nine squarely arranged points with a pen by four (or fewer) straight lines without lifting the pen. The puzzle has appeared under various other names over the years.
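WEP seeds RC4 with a 24-bit IV concatenated with the secret key; because the IV space is so small, keystream reuse is inevitable, and reused keystream hands an eavesdropper the XOR of two plaintexts. A self-contained sketch (textbook RC4; the IV and key values are arbitrary placeholders):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Plain RC4: key-scheduling algorithm, then keystream XORed with the data.
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

iv, secret = b"\x01\x02\x03", b"40bit"         # 24-bit IV + 40-bit secret, WEP-style
p1, p2 = b"send money now", b"hold fire  now"
c1 = rc4(iv + secret, p1)
c2 = rc4(iv + secret, p2)                      # IV reuse: identical keystream
xor = bytes(a ^ b for a, b in zip(c1, c2))
assert xor == bytes(a ^ b for a, b in zip(p1, p2))  # ciphertext XOR = plaintext XOR
```

This keystream-reuse leak is only one of WEP's failures; related-key biases in RC4 and the forgeable CRC-32 integrity check break it further.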
epfl-collab
The DES key schedule\dots
['\\dots is only used during the encryption phase, not during the decryption phase.', '\\dots generates 16 subkeys.', '\\dots takes as an input a key of 128 bits.', '\\dots is based on a Feistel scheme.']
B
null
Document 1::: Key Sequenced Data Set A key-sequenced data set (KSDS) is a type of data set used by IBM's VSAM computer data storage system. Each record in a KSDS data file is embedded with a unique key. A KSDS consists of two parts, the data component and a separate index file known as the index component which allows the system to physically locate the record in the data file by its key value. Together, the data and index components are called a cluster. Document 2::: CIKS-1 CIKS-1 uses four types of operations: data-dependent permutations, fixed permutations, XORs, and addition mod 4. The designers of CIKS-1 didn't specify any key schedule for the cipher, but it uses a total key size of 256 bits. Kidney, Heys, and Norvell showed that round keys of low Hamming weight are relatively weak, so keys should be chosen carefully. The same researchers have also proposed a differential cryptanalysis of CIKS-1 which uses 256 chosen plaintexts. Document 3::: M6 (cipher) The key schedule is very simple, producing two 32-bit subkeys: the high 32 bits of the key, and the sum mod 2^32 of this and the low 32 bits. Because its round function is based on rotation and addition, M6 was one of the first ciphers attacked by mod n cryptanalysis. Mod 5, about 100 known plaintexts suffice to distinguish the output from a pseudorandom permutation. Document 4::: UES (cipher) In cryptography, UES (Universal Encryption Standard) is a block cipher designed in 1999 by Helena Handschuh and Serge Vaudenay. They proposed it as a transitional step, to prepare for the completion of the AES process. UES was designed with the same interface as AES: a block size of 128 bits and key size of 128, 192, or 256 bits. It consists of two parallel Triple DES encryptions on the halves of the block, with key whitening and key-dependent swapping of bits between the halves. The key schedule is taken from DEAL. Document 5::: Vibroplex The advantage of the key over a standard telegraph key is that it automatically generates strings of one of the two pulses from which Morse code characters are composed, the shortest one or "dot" (or dits), so that the operator's hand does not have to make the rapid movements necessary to generate multiple dots. When the knob is pressed from the right, it makes a continuous contact suitable for sending "dashes" (or dahs). When the paddle is pressed from the left, a horizontal pendulum at the opposite end of the lever is set into motion, intermittently closing a set of contacts, sending a series of short pulses "dots" at a speed that is controlled by the position of the pendulum weight.
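The answer rests on DES's structure: a 56-bit key (64 bits including parity) is split into two 28-bit halves, each rotated left by a fixed per-round amount, yielding one 48-bit subkey per round, 16 in all. A simplified sketch that keeps the real rotation schedule but omits the PC-1/PC-2 permutations, so the values below are 56-bit stand-ins rather than genuine DES subkeys:

```python
# Per-round left-rotation amounts from the DES standard (they sum to 28,
# so the halves return to their starting position after round 16).
SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

def rotl28(x: int, r: int) -> int:
    # Rotate a 28-bit value left by r bits.
    return ((x << r) | (x >> (28 - r))) & ((1 << 28) - 1)

def subkeys(c: int, d: int):
    for r in SHIFTS:
        c, d = rotl28(c, r), rotl28(d, r)
        yield (c << 28) | d        # real DES would now apply PC-2 -> 48 bits

ks = list(subkeys(0x1234567, 0x89ABCDE))   # arbitrary 28-bit halves
assert len(ks) == 16                        # the schedule generates 16 subkeys
```

Decryption uses the same 16 subkeys in reverse order, which is why the schedule serves both phases.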