Dataset Viewer
Auto-converted to Parquet. Columns: source_dataset (string, 1 class), question (string, 6–1.87k chars), choices (string, 20–1.02k chars), answer (string, 4 classes), rationale (float64), documents (string, 1.01k–5.9k chars).
epfl-collab
Which of the following scheduler policies are preemptive?
['RR (Round Robin)', 'FIFO (First In, First Out)', 'STCF (Shortest Time to Completion First)', 'SJF (Shortest Job First)']
C
null
Document 1::: Kernel preemption In computer operating system design, kernel preemption is a property possessed by some kernels (the cores of operating systems), in which the CPU can be interrupted in the middle of executing kernel code and assigned other tasks (from which it later returns to finish its kernel tasks). Document 2::: O(1) scheduler An O(1) scheduler (pronounced "O of 1 scheduler", "Big O of 1 scheduler", or "constant time scheduler") is a kernel scheduling design that can schedule processes within a constant amount of time, regardless of how many processes are running on the operating system. This is an improvement over previously used O(n) schedulers, which schedule processes in an amount of time that scales linearly based on the amounts of inputs. In the realm of real-time operating systems, deterministic execution is key, and an O(1) scheduler is able to provide scheduling services with a fixed upper-bound on execution times. The O(1) scheduler was used in Linux releases 2.6.0 thru 2.6.22 (2003-2007), at which point it was superseded by the Completely Fair Scheduler. Document 3::: Micro-Controller Operating Systems Lower priority tasks can be preempted by higher priority tasks at any time. Higher priority tasks use operating system (OS) services (such as a delay or event) to allow lower priority tasks to execute. OS services are provided for managing tasks and memory, communicating between tasks, and timing. Document 4::: Max-min fairness In communication networks, multiplexing and the division of scarce resources, max-min fairness is said to be achieved by an allocation if and only if the allocation is feasible and an attempt to increase the allocation of any participant necessarily results in the decrease in the allocation of some other participant with an equal or smaller allocation. In best-effort statistical multiplexing, a first-come first-served (FCFS) scheduling policy is often used. The advantage with max-min fairness over FCFS is that it results in traffic shaping, meaning that an ill-behaved flow, consisting of large data packets or bursts of many packets, will only punish itself and not other flows. Network congestion is consequently to some extent avoided. Fair queuing is an example of a max-min fair packet scheduling algorithm for statistical multiplexing and best-effort networks, since it gives scheduling priority to users that have achieved lowest data rate since they became active. In case of equally sized data packets, round-robin scheduling is max-min fair. Document 5::: Priority inversion In computer science, priority inversion is a scenario in scheduling in which a high priority task is indirectly superseded by a lower priority task effectively inverting the assigned priorities of the tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks. Inversion occurs when there is a resource contention with a low-priority task that is then preempted by a medium-priority task.
epfl-collab
Which of the following are correct implementations of the acquire function? Assume 0 means UNLOCKED and 1 means LOCKED. Initially l->locked = 0.
['c \n void acquire(struct lock *l)\n {\n if(l->locked == 0) \n return;\n }', 'c \n void acquire(struct lock *l)\n {\n for(;;)\n if(xchg(&l->locked, 1) == 0)\n return;\n }', 'c \n void acquire(struct lock *l)\n {\n for(;;)\n if(cas(&l->locked, 1, 0) == 1)\n return;\n }', 'c \n void acquire(struct lock *l)\n {\n if(cas(&l->locked, 0, 1) == 0)\n return;\n }']
B
null
Document 1::: Test-and-set A lock can be built using an atomic test-and-set instruction as follows: This code assumes that the memory location was initialized to 0 at some point prior to the first test-and-set. The calling process obtains the lock if the old value was 0, otherwise the while-loop spins waiting to acquire the lock. This is called a spinlock. Document 2::: Phase-locked loop ranges $\exists t>T_{\text{lock}}:\left|\theta_{\Delta}(0)-\theta_{\Delta}(t)\right|\geq 2\pi.$ Here, sometimes, the limit of the difference or the maximum of the difference is considered. Definition of lock-in range: if the loop is in a locked state, then after an abrupt change of $\omega_{\Delta}^{\text{free}}$ within a lock-in range $\left|\omega_{\Delta}^{\text{free}}\right|\leq\omega_{\ell}$, the PLL acquires lock without cycle slipping. Here $\omega_{\ell}$ is called the lock-in frequency. Document 3::: Phase-locked loop range Also called acquisition range, capture range. Assume that the loop power supply is initially switched off and then at $t=0$ the power is switched on, and assume that the initial frequency difference is sufficiently large. The loop may not lock within one beat note, but the VCO frequency will be slowly tuned toward the reference frequency (acquisition process). This effect is also called transient stability. The pull-in range is used to name such frequency deviations that make the acquisition process possible (see, for example, explanations in Gardner (1966, p. Document 4::: Phase-locked loop ranges Such a long acquisition process is called cycle slipping. If the difference between the initial and final phase deviation is larger than $2\pi$, we say that cycle slipping takes place: $\exists t>T_{\text{lock}}:\left|\theta_{\Delta}(0)-\theta_{\Delta}(t)\right|\geq 2\pi.$ Document 5::: Phase-locked loop ranges The terms hold-in range, pull-in range (acquisition range), and lock-in range are widely used by engineers for the concepts of frequency deviation ranges within which phase-locked loop-based circuits can achieve lock under various additional conditions.
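Document 1 above says a lock "can be built using an atomic test-and-set instruction as follows", but the code listing itself did not survive extraction. The following is a minimal C11 sketch of such a spinlock, not the article's original listing; the names spinlock_t, spin_acquire and spin_release are illustrative.

```c
#include <stdatomic.h>

/* Spinlock built on an atomic test-and-set.
 * The flag must start clear (unlocked): spinlock_t l = { ATOMIC_FLAG_INIT }; */
typedef struct {
    atomic_flag flag;   /* clear = unlocked, set = locked */
} spinlock_t;

static void spin_acquire(spinlock_t *l)
{
    /* atomic_flag_test_and_set atomically sets the flag and returns its
     * previous value; the caller obtains the lock iff it was previously
     * clear, otherwise the loop spins until the holder releases it. */
    while (atomic_flag_test_and_set(&l->flag))
        ;   /* spin */
}

static void spin_release(spinlock_t *l)
{
    atomic_flag_clear(&l->flag);
}
```

This mirrors option B of the question above: retry the atomic exchange in a loop and return only when the previous value shows the lock was free.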
epfl-collab
In which of the following cases does JOS acquire the big kernel lock?
['Processor traps in user mode', 'Switching from kernel mode to user mode', 'Processor traps in kernel mode', 'Initialization of application processor']
A
null
Document 1::: Java Optimized Processor Java Optimized Processor (JOP) is a Java processor, an implementation of Java virtual machine (JVM) in hardware. JOP is free hardware under the GNU General Public License, version 3. The intention of JOP is to provide a small hardware JVM for embedded real-time systems. The main feature is the predictability of the execution time of Java bytecodes. JOP is implemented over an FPGA. Document 2::: System Contention Scope In computer science, The System Contention Scope is one of two thread-scheduling schemes used in operating systems. This scheme is used by the kernel to decide which kernel-level thread to schedule onto a CPU, wherein all threads (as opposed to only user-level threads, as in the Process Contention Scope scheme) in the system compete for the CPU. Operating systems that use only the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only System Contention Scope. == References == Document 3::: Kernel preemption In computer operating system design, kernel preemption is a property possessed by some kernels (the cores of operating systems), in which the CPU can be interrupted in the middle of executing kernel code and assigned other tasks (from which it later returns to finish its kernel tasks). Document 4::: Atomic lock In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive: a mechanism that enforces limits on access to a resource when there are many threads of execution. A lock is designed to enforce a mutual exclusion concurrency control policy, and with a variety of possible methods there exists multiple unique implementations for different applications. Document 5::: O(1) scheduler An O(1) scheduler (pronounced "O of 1 scheduler", "Big O of 1 scheduler", or "constant time scheduler") is a kernel scheduling design that can schedule processes within a constant amount of time, regardless of how many processes are running on the operating system. This is an improvement over previously used O(n) schedulers, which schedule processes in an amount of time that scales linearly based on the amounts of inputs. In the realm of real-time operating systems, deterministic execution is key, and an O(1) scheduler is able to provide scheduling services with a fixed upper-bound on execution times. The O(1) scheduler was used in Linux releases 2.6.0 thru 2.6.22 (2003-2007), at which point it was superseded by the Completely Fair Scheduler.
epfl-collab
Assume a user program executes the following tasks. Select all options that will use a system call.
['Read the user\'s input "Hello world" from the keyboard.', 'Send "Hello world" to another machine via Network Interface Card.', 'Write "Hello world" to a file.', 'Encrypt "Hello world" by AES.']
A
null
Document 1::: System call In computing, a system call (commonly abbreviated to syscall) is the programmatic way in which a computer program requests a service from the operating system on which it is executed. This may include hardware-related services (for example, accessing a hard disk drive or accessing the device's camera), creation and execution of new processes, and communication with integral kernel services such as process scheduling. System calls provide an essential interface between a process and the operating system. In most systems, system calls can only be made from userspace processes, while in some systems, OS/360 and successors for example, privileged system code also issues system calls. Document 2::: Choice (command) In computing, choice is a command that allows for batch files to prompt the user to select one item from a set of single-character choices. It is available in a number of operating system command-line shells. Document 3::: Systems programming Systems programming, or system programming, is the activity of programming computer system software. The primary distinguishing characteristic of systems programming when compared to application programming is that application programming aims to produce software which provides services to the user directly (e.g. word processor), whereas systems programming aims to produce software and software platforms which provide services to other software, are performance constrained, or both (e.g. operating systems, computational science applications, game engines, industrial automation, and software as a service applications).Systems programming requires a great degree of hardware awareness. Its goal is to achieve efficient use of available resources, either because the software itself is performance critical or because even small efficiency improvements directly transform into significant savings of time or money. Document 4::: Process Explorer For example, it provides a means to list or search for named resources that are held by a process or all processes. This can be used to track down what is holding a file open and preventing its use by another program. As another example, it can show the command lines used to start a program, allowing otherwise identical processes to be distinguished. Like Task Manager, it can show a process that is maxing out the CPU, but unlike Task Manager it can show which thread (with the callstack) is using the CPU – information that is not even available under a debugger. Document 5::: Invoke operator (computer programming) Programs for a computer may be executed in a batch process without human interaction or a user may type commands in an interactive session of an interpreter. In this case, the "commands" are simply program instructions, whose execution is chained together. The term run is used almost synonymously. A related meaning of both "to run" and "to execute" refers to the specific action of a user starting (or launching or invoking) a program, as in "Please run the application."
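As a quick illustration of the distinction this question tests (a hypothetical sketch, not part of the dataset): reading keyboard input, writing a file, and sending data over the network all request kernel services through system calls, whereas encrypting a buffer that is already in memory is plain user-space computation.

```c
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* Keyboard input: read() on stdin (fd 0) is a system call. */
    ssize_t n = read(0, buf, sizeof buf);

    /* File output: open() and write() are system calls. */
    int fd = open("hello.txt", O_WRONLY | O_CREAT, 0644);
    if (fd >= 0 && n > 0)
        write(fd, buf, (size_t)n);

    /* Network send: socket(), connect() and send() are system calls
     * (the connect()/send() steps are omitted here for brevity). */
    int s = socket(AF_INET, SOCK_STREAM, 0);

    /* AES-encrypting buf with a user-space library routine (e.g. a
     * hypothetical aes_encrypt(buf, key)) needs no kernel service,
     * hence no system call. */

    if (s >= 0) close(s);
    if (fd >= 0) close(fd);
    return 0;
}
```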
epfl-collab
What are the drawbacks of non-preemptive scheduling compared to preemptive scheduling?
['Bugs in one process can cause a machine to freeze up', 'It can lead to poor response time for processes', 'It can lead to starvation especially for real-time tasks', 'Less computational resources are needed for scheduling, and it takes a shorter time to suspend the running task and switch the context.']
C
null
Document 1::: Least slack time scheduling This algorithm is also known as least laxity first. Its most common use is in embedded systems, especially those with multiple processors. It imposes the simple constraint that each process on each available processor possesses the same run time, and that individual processes do not have an affinity to a certain processor. This is what lends it a suitability to embedded systems. Document 2::: Two-level scheduling If this variable is not considered resource starvation may occur and a process may not complete at all. Size of the process: Larger processes must be subject to fewer swaps than smaller ones because they take longer time to swap. Because they are larger, fewer processes can share the memory with the process. Priority: The higher the priority of the process, the longer it should stay in memory so that it completes faster. Document 3::: Two-level scheduling Exactly how it selects processes is up to the implementation of the higher-level scheduler. A compromise has to be made involving the following variables: Response time: A process should not be swapped out for too long. Then some other process (or the user) will have to wait needlessly long. Document 4::: Nondeterministic algorithm In computer programming, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run. A concurrent algorithm can perform differently on different runs due to a race condition. A probabilistic algorithm's behaviors depends on a random number generator. Document 5::: Micro-Controller Operating Systems Lower priority tasks can be preempted by higher priority tasks at any time. Higher priority tasks use operating system (OS) services (such as a delay or event) to allow lower priority tasks to execute. OS services are provided for managing tasks and memory, communicating between tasks, and timing.
epfl-collab
Select valid answers about file descriptors (FD):
['FD is usually used as an argument for read and write.', 'The value of FD is unique for every file in the operating system.', 'FD is constructed by hashing the filename.', 'FDs are preserved after fork() and can be used in the new process pointing to the original files.']
A
null
Document 1::: Data descriptor In computing, a data descriptor is a structure containing information that describes data. Data descriptors may be used in compilers, as a software structure at run time in languages like Ada or PL/I, or as a hardware structure in some computers such as Burroughs large systems. Data descriptors are typically used at run-time to pass argument information to called subroutines. HP OpenVMS and Multics have system-wide language-independent standards for argument descriptors. Descriptors are also used to hold information about data that is only fully known at run-time, such as a dynamically allocated array. Document 2::: Compound File Binary Format Compound File Binary Format (CFBF), also called Compound File, Compound Document format, or Composite Document File V2 (CDF), is a compound document file format for storing numerous files and streams within a single file on a disk. CFBF is developed by Microsoft and is an implementation of Microsoft COM Structured Storage.Microsoft has opened the format for use by others and it is now used in a variety of programs from Microsoft Word and Microsoft Access to Business Objects. It also forms the basis of the Advanced Authoring Format. Document 3::: Disk Filing System Each filename can be up to seven letters long, plus one letter for the directory in which the file is stored.The DFS is remarkable in that unlike most filing systems, there was no single vendor or implementation. The original DFS was written by Acorn, who continued to maintain their own codebase, but various disc drive vendors wrote their own implementations. Companies who wrote their own DFS implementations included Cumana, Solidisk, Opus and Watford Electronics. Document 4::: Segment descriptor In memory addressing for Intel x86 computer architectures, segment descriptors are a part of the segmentation unit, used for translating a logical address to a linear address. Segment descriptors describe the memory segment referred to in the logical address. The segment descriptor (8 bytes long in 80286 and later) contains the following fields: A segment base address The segment limit which specifies the segment size Access rights byte containing the protection mechanism information Control bits Document 5::: Fire Dynamics Simulator It models vegetative fuel either by explicitly defining the volume of the vegetation or, for surface fuels such as grass, by assuming uniform fuel at the air-ground boundary.FDS is a Fortran program that reads input parameters from a text file, computes a numerical solution to the governing equations, and writes user-specified output data to files. Smokeview is a companion program that reads FDS output files and produces animations on the computer screen. Smokeview has a simple menu-driven interface, while FDS does not. However, there are various third-party programs that have been developed to generate the text file containing the input parameters needed by FDS.
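To make the first and last options concrete (an illustrative sketch, not taken from the dataset): a file descriptor is a small per-process integer handle passed to read() and write(), and descriptors open at the time of fork() remain valid in the child, referring to the same open file.

```c
#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* open() returns a file descriptor: a small integer handle used as the
     * first argument of read()/write(). */
    int fd = open("log.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return 1;

    write(fd, "parent\n", 7);

    /* Descriptors are inherited across fork(): the child's fd refers to the
     * same open file description, so both lines end up in log.txt. */
    pid_t pid = fork();
    if (pid == 0) {
        write(fd, "child\n", 6);
        _exit(0);
    }
    wait(NULL);
    close(fd);
    return 0;
}
```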
epfl-collab
Suppose a file system is used only for reading immutable files in a random fashion. What is the best block allocation strategy?
['Index allocation with Hash-table', 'Index allocation with B-tree', 'Linked-list allocation', 'Continuous allocation']
D
null
Document 1::: Block size (data storage and transmission) Some newer file systems, such as Btrfs and FreeBSD UFS2, attempt to solve this through techniques called block suballocation and tail merging. Other file systems such as ZFS support variable block sizes.Block storage is normally abstracted by a file system or database management system (DBMS) for use by applications and end users. Document 2::: Block size (data storage and transmission) Most file systems are based on a block device, which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data, though the block size in file systems may be a multiple of the physical block size. This leads to space inefficiency due to internal fragmentation, since file lengths are often not integer multiples of block size, and thus the last block of a file may remain partially empty. This will create slack space. Document 3::: Extent (file systems) Extent-based file systems can also eliminate most of the metadata overhead of large files that would traditionally be taken up by the block-allocation tree. But because the savings are small compared to the amount of stored data (for all file sizes in general) but make up a large portion of the metadata (for large files), the overall benefits in storage efficiency and performance are slight.In order to resist fragmentation, several extent-based file systems do allocate-on-flush. Many modern fault-tolerant file systems also do copy-on-write, although that increases fragmentation. Document 4::: Block-level storage Block-level storage is a concept in cloud-hosted data persistence where cloud services emulate the behaviour of a traditional block device, such as a physical hard drive.Storage in such services is organised as blocks. This emulates the type of behaviour seen in traditional disks or tape storage through storage virtualization. Blocks are identified by an arbitrary and assigned identifier by which they may be stored and retrieved, but this has no obvious meaning in terms of files or documents. A file system must be applied on top of the block-level storage to map 'files' onto a sequence of blocks. Document 5::: Delayed allocation Allocate-on-flush (also called delayed allocation) is a file system feature implemented in HFS+, XFS, Reiser4, ZFS, Btrfs, and ext4. The feature also closely resembles an older technique that Berkeley's UFS called "block reallocation". When blocks must be allocated to hold pending writes, disk space for the appended data is subtracted from the free-space counter, but not actually allocated in the free-space bitmap. Instead, the appended data are held in memory until they must be flushed to storage due to memory pressure, when the kernel decides to flush dirty buffers, or when the application performs the Unix sync system call, for example.
epfl-collab
Which of the following operations would switch the user program from user space to kernel space?
['Calling sin() in math library.', 'Jumping to an invalid address.', 'Invoking read() syscall.', 'Dividing integer by 0.']
D
null
Document 1::: OS kernel In contrast, application programs such as browsers, word processors, or audio or video players use a separate area of memory, user space. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness, as well as preventing malfunctioning applications from affecting other applications or crashing the entire operating system. Even in systems where the kernel is included in application address spaces, memory protection is used to prevent unauthorized applications from modifying the kernel. Document 2::: Disk swapping In order to use a function of the program not loaded into memory, the user would have to first remove the data disk, then insert the program disk. When the user then wanted to save their file, the reverse operation would have to be performed. Document 3::: OS kernel It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit. The critical code of the kernel is usually loaded into a separate area of memory, which is protected from access by application software or other less critical parts of the operating system. The kernel performs its tasks, such as running processes, managing hardware devices such as the hard disk, and handling interrupts, in this protected kernel space. Document 4::: OS kernel The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources (e.g. I/O, memory, cryptography) via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources e.g. CPU & cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup (after the bootloader). Document 5::: OS kernel There are different kernel architecture designs. Monolithic kernels run entirely in a single address space with the CPU executing in supervisor mode, mainly for speed. Microkernels run most but not all of their services in user space, like user processes do, mainly for resilience and modularity.
epfl-collab
Which flag prevents user programs from reading and writing kernel data?
['PTE_P', 'PTE_W', 'PTE_U', 'PTE_D']
C
null
Document 1::: OS kernel In contrast, application programs such as browsers, word processors, or audio or video players use a separate area of memory, user space. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness, as well as preventing malfunctioning applications from affecting other applications or crashing the entire operating system. Even in systems where the kernel is included in application address spaces, memory protection is used to prevent unauthorized applications from modifying the kernel. Document 2::: OS kernel The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources (e.g. I/O, memory, cryptography) via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources e.g. CPU & cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup (after the bootloader). Document 3::: Capsicum (Unix) A process can also receive capabilities via Unix sockets. These file descriptors not only control access to the file system, but also to other devices like the network sockets. Flags are also used to control more fine-grained access like reads and writes. Document 4::: OS kernel It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit. The critical code of the kernel is usually loaded into a separate area of memory, which is protected from access by application software or other less critical parts of the operating system. The kernel performs its tasks, such as running processes, managing hardware devices such as the hard disk, and handling interrupts, in this protected kernel space. Document 5::: CPU modes CPU modes (also called processor modes, CPU states, CPU privilege levels and other names) are operating modes for the central processing unit of some computer architectures that place restrictions on the type and scope of operations that can be performed by certain processes being run by the CPU. This design allows the operating system to run with more privileges than application software.Ideally, only highly trusted kernel code is allowed to execute in the unrestricted mode; everything else (including non-supervisory portions of the operating system) runs in a restricted mode and must use a system call (via interrupt) to request the kernel perform on its behalf any operation that could damage or compromise the system, making it impossible for untrusted programs to alter or damage other programs (or the computing system itself). In practice, however, system calls take time and can hurt the performance of a computing system, so it is not uncommon for system designers to allow some time-critical software (especially device drivers) to run with full kernel privileges. Multiple modes can be implemented—allowing a hypervisor to run multiple operating system supervisors beneath it, which is the basic design of many virtual machine systems available today.
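For context on the options, here is a sketch assuming the xv6/JOS-style flag names from the question; the bit positions themselves are fixed by the x86 paging hardware. Kernel pages are mapped present and writable but without the user bit, so user-mode accesses fault.

```c
/* x86 page-table entry flag bits (xv6/JOS-style names; bit positions follow
 * the x86 PTE layout). */
#define PTE_P 0x001  /* Present */
#define PTE_W 0x002  /* Writeable */
#define PTE_U 0x004  /* User-accessible */
#define PTE_D 0x040  /* Dirty (set by hardware on a write) */

/* Kernel data: present and writable, but without PTE_U, so any user-mode
 * read or write of these pages raises a page fault. */
unsigned int kernel_flags = PTE_P | PTE_W;

/* User pages additionally set PTE_U. */
unsigned int user_flags = PTE_P | PTE_W | PTE_U;
```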
epfl-collab
In which of the following cases does the TLB need to be flushed?
['Inserting a new page into the page table for kernel.', 'Inserting a new page into the page table for a user-space application.', 'Changing the read/write permission bit in the page table.', 'Deleting a page from the page table.']
D
null
Document 1::: Translation look-aside buffer A translation lookaside buffer (TLB) is a memory cache that stores the recent translations of virtual memory to physical memory. It is used to reduce the time taken to access a user memory location. It can be called an address-translation cache. It is a part of the chip's memory-management unit (MMU). Document 2::: Translation look-aside buffer A TLB may reside between the CPU and the CPU cache, between CPU cache and the main memory or between the different levels of the multi-level cache. The majority of desktop, laptop, and server processors include one or more TLBs in the memory-management hardware, and it is nearly always present in any processor that utilizes paged or segmented virtual memory. The TLB is sometimes implemented as content-addressable memory (CAM). Document 3::: Translation look-aside buffer The CAM search key is the virtual address, and the search result is a physical address. If the requested address is present in the TLB, the CAM search yields a match quickly and the retrieved physical address can be used to access memory. This is called a TLB hit. Document 4::: Delayed allocation Allocate-on-flush (also called delayed allocation) is a file system feature implemented in HFS+, XFS, Reiser4, ZFS, Btrfs, and ext4. The feature also closely resembles an older technique that Berkeley's UFS called "block reallocation". When blocks must be allocated to hold pending writes, disk space for the appended data is subtracted from the free-space counter, but not actually allocated in the free-space bitmap. Instead, the appended data are held in memory until they must be flushed to storage due to memory pressure, when the kernel decides to flush dirty buffers, or when the application performs the Unix sync system call, for example. Document 5::: Translation look-aside buffer If the requested address is not in the TLB, it is a miss, and the translation proceeds by looking up the page table in a process called a page walk. The page walk is time-consuming when compared to the processor speed, as it involves reading the contents of multiple memory locations and using them to compute the physical address. After the physical address is determined by the page walk, the virtual address to physical address mapping is entered into the TLB. The PowerPC 604, for example, has a two-way set-associative TLB for data loads and stores. Some processors have different instruction and data address TLBs.
epfl-collab
In x86, select all synchronous exceptions.
['Divide error', 'Page Fault', 'Timer', 'Keyboard']
A
null
Document 1::: Triple fault On the x86 computer architecture, a triple fault is a special kind of exception generated by the CPU when an exception occurs while the CPU is trying to invoke the double fault exception handler, which itself handles exceptions occurring while trying to invoke a regular exception handler. x86 processors beginning with the 80286 will cause a shutdown cycle to occur when a triple fault is encountered. This typically causes the motherboard hardware to initiate a CPU reset, which, in turn, causes the whole computer to reboot. Document 2::: Segmentation violation Processes can in some cases install a custom signal handler, allowing them to recover on their own, but otherwise the OS default signal handler is used, generally causing abnormal termination of the process (a program crash), and sometimes a core dump. Segmentation faults are a common class of error in programs written in languages like C that provide low-level memory access and few to no safety checks. They arise primarily due to errors in use of pointers for virtual memory addressing, particularly illegal access. Document 3::: Segmentation violation In computing, a segmentation fault (often shortened to segfault) or access violation is a fault, or failure condition, raised by hardware with memory protection, notifying an operating system (OS) the software has attempted to access a restricted area of memory (a memory access violation). On standard x86 computers, this is a form of general protection fault. The operating system kernel will, in response, usually perform some corrective action, generally passing the fault on to the offending process by sending the process a signal. Document 4::: Exception handling syntax Exception handling syntax is the set of keywords and/or structures provided by a computer programming language to allow exception handling, which separates the handling of errors that arise during a program's operation from its ordinary processes. Syntax for exception handling varies between programming languages, partly to cover semantic differences but largely to fit into each language's overall syntactic structure. Some languages do not call the relevant concept "exception handling"; others may not have direct facilities for it, but can still provide means to implement it. Most commonly, error handling uses a try... block, and errors are created via a throw statement, but there is significant variation in naming and syntax. Document 5::: Transactional Synchronization Extensions Transactional Synchronization Extensions (TSX), also called Transactional Synchronization Extensions New Instructions (TSX-NI), is an extension to the x86 instruction set architecture (ISA) that adds hardware transactional memory support, speeding up execution of multi-threaded software through lock elision. According to different benchmarks, TSX/TSX-NI can provide around 40% faster applications execution in specific workloads, and 4–5 times more database transactions per second (TPS).TSX/TSX-NI was documented by Intel in February 2012, and debuted in June 2013 on selected Intel microprocessors based on the Haswell microarchitecture. Haswell processors below 45xx as well as R-series and K-series (with unlocked multiplier) SKUs do not support TSX/TSX-NI. 
In August 2014, Intel announced a bug in the TSX/TSX-NI implementation on current steppings of Haswell, Haswell-E, Haswell-EP and early Broadwell CPUs, which resulted in disabling the TSX/TSX-NI feature on affected CPUs via a microcode update.In 2016, a side-channel timing attack was found by abusing the way TSX/TSX-NI handles transactional faults (i.e. page faults) in order to break kernel address space layout randomization (KASLR) on all major operating systems. In 2021, Intel released a microcode update that disabled the TSX/TSX-NI feature on CPU generations from Skylake to Coffee Lake, as a mitigation for discovered security issues.Support for TSX/TSX-NI emulation is provided as part of the Intel Software Development Emulator. There is also experimental support for TSX/TSX-NI emulation in a QEMU fork.
epfl-collab
Which of the following executions of an application are possible on a single-core machine?
['Both concurrent and parallel execution', 'Parallel execution', 'Neither concurrent or parallel execution', 'Concurrent execution']
D
null
Document 1::: Superscalar execution A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit. Document 2::: Many-core processing unit Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading. Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU). Core count goes up to even dozens, and for specialized chips over 10,000, and in supercomputers (i.e. clusters of chips) the count can go over 10 million (and in one case up to 20 million processing elements total in addition to host processors).The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. Document 3::: Single cycle processor A single cycle processor is a processor that carries out one instruction in a single clock cycle. Document 4::: Many-core processing unit A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. Document 5::: VISC architecture This form of multithreading can increase single threaded performance by allowing a single thread to use all resources of the CPU. The allocation of resources is dynamic on a near-single cycle latency level (1–4 cycles depending on the change in allocation depending on individual application needs.
epfl-collab
In an x86 multiprocessor system with JOS, select all the correct options. Assume every Env has a single thread.
['One Env could run on two different processors at different times.', 'Two Envs could run on the same processor simultaneously.', 'Two Envs could run on two different processors simultaneously.', 'One Env could run on two different processors simultaneously.']
C
null
Document 1::: Java Optimized Processor Java Optimized Processor (JOP) is a Java processor, an implementation of Java virtual machine (JVM) in hardware. JOP is free hardware under the GNU General Public License, version 3. The intention of JOP is to provide a small hardware JVM for embedded real-time systems. The main feature is the predictability of the execution time of Java bytecodes. JOP is implemented over an FPGA. Document 2::: MultiProcessor Specification The MultiProcessor Specification (MPS) for the x86 architecture is an open standard describing enhancements to both operating systems and firmware, which will allow them to work with x86-compatible processors in a multi-processor configuration. MPS covers Advanced Programmable Interrupt Controller (APIC) architectures. Version 1.1 of the specification was released on April 11, 1994. Version 1.4 of the specification was released on July 1, 1995, which added extended configuration tables to improve support for multiple PCI bus configurations and improve expandability. Document 3::: System Contention Scope In computer science, The System Contention Scope is one of two thread-scheduling schemes used in operating systems. This scheme is used by the kernel to decide which kernel-level thread to schedule onto a CPU, wherein all threads (as opposed to only user-level threads, as in the Process Contention Scope scheme) in the system compete for the CPU. Operating systems that use only the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only System Contention Scope. == References == Document 4::: Cellular multiprocessing Cellular multiprocessing is a multiprocessing computing architecture designed initially for Intel central processing units from Unisys, a worldwide information technology consulting company. It consists of the partitioning of processors into separate computing environments running different operating systems. Providing up to 32 processors that are crossbar connected to 64GB of memory and 96 PCI cards, a CMP system provides mainframe-like architecture using Intel CPUs. CMP supports Windows NT and Windows 2000 Server, AIX, Novell NetWare and UnixWare and can be run as one large SMP system or multiple systems with variant operating systems. Document 5::: Cray J90 All input/output in a J90 system was handled by an IOS (Input/Output Subsystem) called IOS Model V. The IOS-V was based on the VME64 bus and SPARC I/O processors (IOPs) running the VxWorks RTOS. The IOS was programmed to emulate the IOS Model E, used in the larger Cray Y-MP systems, in order to minimize changes in the UNICOS operating system. By using standard VME boards, a wide variety of commodity peripherals could be used.
epfl-collab
In JOS, suppose a value is passed between two Envs. What is the minimum number of executed system calls?
['2', '4', '1', '3']
A
null
Document 1::: Virtual Execution System The Virtual Execution System (VES) is a run-time system of the Common Language Infrastructure CLI which provides an environment for executing managed code. It provides direct support for a set of built-in data types, defines a hypothetical machine with an associated machine model and state, a set of control flow constructs, and an exception handling model. To a large extent, the purpose of the VES is to provide the support required to execute the Common Intermediate Language CIL instruction set. Document 2::: Instruction path length In computer performance, the instruction path length is the number of machine code instructions required to execute a section of a computer program. The total path length for the entire program could be deemed a measure of the algorithm's performance on a particular computer hardware. The path length of a simple conditional instruction would normally be considered as equal to 2, one instruction to perform the comparison and another to take a branch if the particular condition is satisfied. Document 3::: Java Optimized Processor Java Optimized Processor (JOP) is a Java processor, an implementation of Java virtual machine (JVM) in hardware. JOP is free hardware under the GNU General Public License, version 3. The intention of JOP is to provide a small hardware JVM for embedded real-time systems. The main feature is the predictability of the execution time of Java bytecodes. JOP is implemented over an FPGA. Document 4::: Io (programming language) Io uses actors for concurrency. Remarkable features of Io are its minimal size and openness to using external code resources. Io is executed by a small, portable virtual machine. Document 5::: Linear Code Sequence and Jump An LCSAJ is a software code path fragment consisting of a sequence of code (a linear code sequence) followed by a control flow Jump, and consists of the following three items: the start of the linear sequence of executable statements the end of the linear sequence the target line to which control flow is transferred at the end of the linear sequence.Unlike (maximal) basic blocks, LCSAJs can overlap with each other because a jump (out) may occur in the middle of an LCSAJ, while it isn't allowed in the middle of a basic block. In particular, conditional jumps generate overlapping LCSAJs: one which runs through to where the condition evaluates to false and another that ends at the jump when the condition evaluates to true (the example given further below in this article illustrates such an occurrence). According to a monograph from 1986, LCSAJs were typically four times larger than basic blocks.The formal definition of a LCSAJ can be given in terms of basic blocks as follows: a sequence of one or more consecutively numbered basic blocks, p, (p+1), ..., q, of a code unit, followed by a control flow jump either out of the code or to a basic block numbered r, where r≠(q+1), and either p=1 or there exists a control flow jump to block p from some other block in the unit. (A basic block to which such a control flow jump can be made is referred to as a target of the jump.) According to Jorgensen's 2013 textbook, outside Great Britain and ISTQB literature, the same notion is called DD-path.
epfl-collab
What does the strace tool do?
['To remove wildcards from the string.', 'It prints out system calls for a given program. These system calls are called only for that particular instance of the program.', 'To trace a symlink, i.e. to find where the symlink points to.', 'It prints out system calls for a given program. These system calls are always called when executing the program.']
B
null
Document 1::: Strace strace is a diagnostic, debugging and instructional userspace utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state. The operation of strace is made possible by the kernel feature known as ptrace. Some Unix-like systems provide other diagnostic tools similar to strace, such as truss. Document 2::: Malware research The executed binary code is traced using strace or more precise taint analysis to compute data-flow dependencies among system calls. The result is a directed graph G = ( V , E ) {\displaystyle G=(V,E)} such that nodes are system calls, and edges represent dependencies. For example, ( s , t ) ∈ E {\displaystyle (s,t)\in E} if a result returned by system call s {\displaystyle s} (either directly as a result or indirectly through output parameters) is later used as a parameter of system call t {\displaystyle t} . Document 3::: DTrace DTrace is a comprehensive dynamic tracing framework originally created by Sun Microsystems for troubleshooting kernel and application problems on production systems in real time. Originally developed for Solaris, it has since been released under the free Common Development and Distribution License (CDDL) in OpenSolaris and its descendant illumos, and has been ported to several other Unix-like systems. DTrace can be used to get a global overview of a running system, such as the amount of memory, CPU time, filesystem and network resources used by the active processes. Document 4::: Synthesis Toolkit The Synthesis Toolkit (STK) is an open source API for real time audio synthesis with an emphasis on classes to facilitate the development of physical modelling synthesizers. It is written in C++ and is written and maintained by Perry Cook at Princeton University and Gary Scavone at McGill University. It contains both low-level synthesis and signal processing classes (oscillators, filters, etc.) and higher-level instrument classes which contain examples of most of the currently available physical modelling algorithms in use today. Document 5::: Synthesis Toolkit STK is free software, but a number of its classes, particularly some physical modelling algorithms, are covered by patents held by Stanford University and Yamaha.The STK is used widely in creating software synthesis applications. Versions of the STK instrument classes have been integrated into ChucK, Csound, Real-Time Cmix, Max/MSP (as part of PeRColate), SuperCollider and FAUST. It has been ported to SymbianOS and iOS as well.
epfl-collab
What is a good distance metric to be used when you want to compute the similarity between documents independent of their length? A penalty will be applied for any incorrect answers.
['Chi-squared distance', 'Manhattan distance', 'Euclidean distance', 'Cosine similarity']
D
null
Document 1::: Similarity measure In statistics and related fields, a similarity measure or similarity function or similarity metric is a real-valued function that quantifies the similarity between two objects. Although no single definition of a similarity exists, usually such measures are in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects. Though, in more broad terms, a similarity function may also satisfy metric axioms. Cosine similarity is a commonly used similarity measure for real-valued vectors, used in (among other fields) information retrieval to score the similarity of documents in the vector space model. In machine learning, common kernel functions such as the RBF kernel can be viewed as similarity functions. Document 2::: Cosine similarity Cosine similarity can be seen as a method of normalizing document length during comparison. In the case of information retrieval, the cosine similarity of two documents will range from $0$ to $1$, since the term frequencies cannot be negative. This remains true when using TF-IDF weights. Document 3::: Jaro–Winkler distance In computer science and statistics, the Jaro–Winkler similarity is a string metric measuring an edit distance between two sequences. It is a variant of the Jaro distance metric (1989, Matthew A. Jaro) proposed in 1990 by William E. Winkler. The Jaro–Winkler distance uses a prefix scale $p$ which gives more favourable ratings to strings that match from the beginning for a set prefix length $\ell$. The higher the Jaro–Winkler distance for two strings is, the less similar the strings are. The score is normalized such that 0 means an exact match and 1 means there is no similarity. The original paper actually defined the metric in terms of similarity, so the distance is defined as the inversion of that value (distance = 1 − similarity). Although often referred to as a distance metric, the Jaro–Winkler distance is not a metric in the mathematical sense of that term because it does not obey the triangle inequality. Document 4::: Information distance Information distance is the distance between two finite objects (represented as computer files) expressed as the number of bits in the shortest program which transforms one object into the other one or vice versa on a universal computer. This is an extension of Kolmogorov complexity. The Kolmogorov complexity of a single finite object is the information in that object; the information distance between a pair of finite objects is the minimum information required to go from one object to the other or vice versa. Document 5::: Edit distance In computational linguistics and computer science, edit distance is a string metric, i.e. a way of quantifying how dissimilar two strings (e.g., words) are to one another, that is measured by counting the minimum number of operations required to transform one string into the other. Edit distances find applications in natural language processing, where automatic spelling correction can determine candidate corrections for a misspelled word by selecting words from a dictionary that have a low distance to the word in question. In bioinformatics, it can be used to quantify the similarity of DNA sequences, which can be viewed as strings of the letters A, C, G and T. Different definitions of an edit distance use different sets of string operations.
Levenshtein distance operations are the removal, insertion, or substitution of a character in the string. Being the most common metric, the term Levenshtein distance is often used interchangeably with edit distance.
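Document 2's point about length normalization follows directly from the standard definition of cosine similarity, reproduced here for convenience with term-frequency vectors $\mathbf{a}$ and $\mathbf{b}$:

```latex
\[
\cos(\mathbf{a},\mathbf{b})
  \;=\; \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert}
  \;=\; \frac{\sum_{i=1}^{n} a_i b_i}
             {\sqrt{\sum_{i=1}^{n} a_i^{2}}\;\sqrt{\sum_{i=1}^{n} b_i^{2}}}.
\]
```

Dividing by both vector norms cancels any scaling due to document length, and with non-negative term frequencies the value lies in $[0, 1]$.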
epfl-collab
For this question, one or more assertions can be correct. Tick only the correct assertion(s). There will be a penalty for wrong assertions ticked. Which of the following associations can be considered as illustrative examples for inflectional morphology (with here the simplifying assumption that canonical forms are restricted to the roots only)?
['(hypothesis, hypotheses)', '(to go, went)', '(speaking, talking)', '(activate, action)']
A
null
Document 1::: Inflection In linguistic morphology, inflection (or inflexion) is a process of word formation in which a word is modified to express different grammatical categories such as tense, case, voice, aspect, person, number, gender, mood, animacy, and definiteness. The inflection of verbs is called conjugation, and one can refer to the inflection of nouns, adjectives, adverbs, pronouns, determiners, participles, prepositions and postpositions, numerals, articles, etc., as declension. An inflection expresses grammatical categories with affixation (such as prefix, suffix, infix, circumfix, and transfix), apophony (as Indo-European ablaut), or other modifications. For example, the Latin verb ducam, meaning "I will lead", includes the suffix -am, expressing person (first), number (singular), and tense-mood (future indicative or present subjunctive). Document 2::: Inflection Analytic languages that do not make use of derivational morphemes, such as Standard Chinese, are said to be isolating. Requiring the forms or inflections of more than one word in a sentence to be compatible with each other according to the rules of the language is known as concord or agreement. Document 3::: Inflection Words that are never subject to inflection are said to be invariant; for example, the English verb must is an invariant item: it never takes a suffix or changes form to signify a different grammatical category. Its categories can be determined only from its context. Languages that seldom make use of inflection, such as English, are said to be analytic. Document 4::: Inflection For example, in "the man jumps", "man" is a singular noun, so "jump" is constrained in the present tense to use the third person singular suffix "s". Languages that have some degree of inflection are synthetic languages. These can be highly inflected (such as Latin, Greek, Biblical Hebrew, and Sanskrit), or slightly inflected (such as English, Dutch, Persian). Languages that are so inflected that a sentence can consist of a single highly inflected word (such as many Native American languages) are called polysynthetic languages. Languages in which each inflection conveys only a single grammatical category, such as Finnish, are known as agglutinative languages, while languages in which a single inflection can convey multiple grammatical roles (such as both nominative case and plural, as in Latin and German) are called fusional. Document 5::: Inflection The use of this suffix is an inflection. In contrast, in the English clause "I will lead", the word lead is not inflected for any of person, number, or tense; it is simply the bare form of a verb. The inflected form of a word often contains both one or more free morphemes (a unit of meaning which can stand by itself as a word), and one or more bound morphemes (a unit of meaning which cannot stand alone as a word).
epfl-collab
Which of the following statements are true?
['A $k$-nearest-neighbor classifier is sensitive to outliers.', 'k-nearest-neighbors cannot be used for regression.', 'The more training examples, the more accurate the prediction of a $k$-nearest-neighbor classifier.', 'Training a $k$-nearest-neighbor classifier takes more computational time than applying it / using it for prediction.']
C
null
Document 1::: Markov property (group theory) In the mathematical subject of group theory, the Adian–Rabin theorem is a result that states that most "reasonable" properties of finitely presentable groups are algorithmically undecidable. The theorem is due to Sergei Adian (1955) and, independently, Michael O. Rabin (1958). Document 2::: Rice theorem In computability theory, Rice's theorem states that all non-trivial semantic properties of programs are undecidable. A semantic property is one about the program's behavior (for instance, does the program terminate for all inputs), unlike a syntactic property (for instance, does the program contain an if-then-else statement). A property is non-trivial if it is neither true for every partial computable function, nor false for every partial computable function. Document 3::: Remarks on the Foundations of Mathematics Thus it can only be true, but unprovable." Just as we can ask, " 'Provable' in what system?," so we must also ask, "'True' in what system?" "True in Russell's system" means, as was said, proved in Russell's system, and "false" in Russell's system means the opposite has been proved in Russell's system.—Now, what does your "suppose it is false" mean? Document 4::: Löwenheim–Skolem theorem As a consequence, first-order theories are unable to control the cardinality of their infinite models. The (downward) Löwenheim–Skolem theorem is one of the two key properties, along with the compactness theorem, that are used in Lindström's theorem to characterize first-order logic. In general, the Löwenheim–Skolem theorem does not hold in stronger logics such as second-order logic. Document 5::: Binary relations The statement $(x,y)\in R$ reads "x is R-related to y" and is denoted by xRy. The domain of definition or active domain of R is the set of all x such that xRy for at least one y. The codomain of definition, active codomain, image or range of R is the set of all y such that xRy for at least one x. The field of R is the union of its domain of definition and its codomain of definition. When $X=Y$, a binary relation is called a homogeneous relation (or endorelation). To emphasize the fact that X and Y are allowed to be different, a binary relation is also called a heterogeneous relation. In a binary relation, the order of the elements is important; if $x\neq y$ then yRx can be true or false independently of xRy. For example, 3 divides 9, but 9 does not divide 3.
epfl-collab
In text representation learning, which of the following statements is correct?
['FastText performs unsupervised learning of word vectors.', 'If you fix all word vectors, and only train the remaining parameters, then FastText in the two-class case reduces to being just a linear classifier.', 'Learning GloVe vectors can be done using SGD in a streaming fashion, by streaming through the input text only once.', 'Every recommender systems algorithm for learning a matrix factorization $\\boldsymbol{W} \\boldsymbol{Z}^{\\top}$ approximating the observed entries in least square sense does also apply to learn GloVe word vectors.']
D
null
Document 1::: Sequence labeling In machine learning, sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values. A common example of a sequence labeling task is part of speech tagging, which seeks to assign a part of speech to each word in an input sentence or document. Sequence labeling can be treated as a set of independent classification tasks, one per member of the sequence. However, accuracy is generally improved by making the optimal label for a given element dependent on the choices of nearby elements, using special algorithms to choose the globally best set of labels for the entire sequence at once. Document 2::: Sequence labeling In machine learning, sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values. A common example of a sequence labeling task is part of speech tagging, which seeks to assign a part of speech to each word in an input sentence or document. Sequence labeling can be treated as a set of independent classification tasks, one per member of the sequence. However, accuracy is generally improved by making the optimal label for a given element dependent on the choices of nearby elements, using special algorithms to choose the globally best set of labels for the entire sequence at once. Document 3::: Feature learning In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. Document 4::: Feature learning In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. Document 5::: Feature learning In supervised feature learning, features are learned using labeled input data. Labeled data includes input-label pairs where the input is given to the model and it must produce the ground truth label as the correct answer. This can be leveraged to generate feature representations with the model which result in high label prediction accuracy.
epfl-collab
Consider a matrix factorization problem of the form $\mathbf{X}=\mathbf{W Z}^{\top}$ to obtain an item-user recommender system, where $x_{i j}$ denotes the rating given by the $j^{\text{th}}$ user to the $i^{\text{th}}$ item. We use the root mean square error (RMSE) to gauge the quality of the factorization obtained. Select the correct option.
['Given a new item and a few ratings from existing users, we need to retrain the already trained recommender system from scratch to generate robust ratings for the user-item pairs containing this item.', 'For obtaining a robust factorization of a matrix $\\mathbf{X}$ with $D$ rows and $N$ elements where $N \\ll D$, the latent dimension $\\mathrm{K}$ should lie somewhere between $D$ and $N$.', 'None of the other options are correct.', 'Regularization terms for $\\mathbf{W}$ and $\\mathbf{Z}$ in the form of their respective Frobenius norms are added to the RMSE so that the resulting objective function becomes convex.']
C
null
Document 1::: Maximum inner-product search Maximum inner-product search (MIPS) is a search problem, with a corresponding class of search algorithms which attempt to maximise the inner product between a query and the data items to be retrieved. MIPS algorithms are used in a wide variety of big data applications, including recommendation algorithms and machine learning. Formally, for a database of vectors $x_{i}$ defined over a set of labels $S$ in an inner product space with an inner product $\langle \cdot ,\cdot \rangle$ defined on it, MIPS search can be defined as the problem of determining $\underset{i\in S}{\operatorname{arg\,max}}\ \langle x_{i},q\rangle$ for a given query $q$. Although there is an obvious linear-time implementation, it is generally too slow to be used on practical problems. However, efficient algorithms exist to speed up MIPS search. Under the assumption of all vectors in the set having constant norm, MIPS can be viewed as equivalent to a nearest neighbor search (NNS) problem in which maximizing the inner product is equivalent to minimizing the corresponding distance metric in the NNS problem. Like other forms of NNS, MIPS algorithms may be approximate or exact. MIPS search is used as part of DeepMind's RETRO algorithm. Document 2::: Matrix factorization (algebra) (1) For $S=\mathbb{C}]$ and $f=x^{n}$ there is a matrix factorization $d_{0}:S\rightleftarrows S:d_{1}$ where $d_{0}=x^{i}, d_{1}=x^{n-i}$ for $0\leq i\leq n$. (2) If $S=\mathbb{C}]$ and $f=xy+xz+yz$, then there is a matrix factorization $d_{0}:S^{2}\rightleftarrows S^{2}:d_{1}$ where $d_{0}={\begin{bmatrix}z&y\\x&-x-y\end{bmatrix}}$ and $d_{1}={\begin{bmatrix}x+y&y\\x&-z\end{bmatrix}}$. Document 3::: LU factorization Let A be a square matrix. An LU factorization refers to the factorization of A, with proper row and/or column orderings or permutations, into two factors – a lower triangular matrix L and an upper triangular matrix U: $A=LU$. In the lower triangular matrix all elements above the diagonal are zero, in the upper triangular matrix, all the elements below the diagonal are zero. Document 4::: Iterative proportional fitting We have also the entropy maximization, information loss minimization (or cross-entropy) or RAS which consists of factoring the matrix rows to match the specified row totals, then factoring its columns to match the specified column totals; each step usually disturbs the previous step’s match, so these steps are repeated in cycles, re-adjusting the rows and columns in turn, until all specified marginal totals are satisfactorily approximated. However, all algorithms give the same solution. In three- or more-dimensional cases, adjustment steps are applied for the marginals of each dimension in turn, the steps likewise repeated in cycles. Document 5::: Factorization Factorization may also refer to more general decompositions of a mathematical object into the product of smaller or simpler objects. For example, every function may be factored into the composition of a surjective function with an injective function. Matrices possess many kinds of matrix factorizations.
For example, every matrix has a unique LUP factorization as a product of a lower triangular matrix L with all diagonal entries equal to one, an upper triangular matrix U, and a permutation matrix P; this is a matrix formulation of Gaussian elimination.
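As a rough illustration of the RMSE criterion mentioned in the question, the sketch below evaluates a factorization X ≈ W Zᵀ over observed entries only. The shapes, noise level, and observation mask are assumptions made up for this sketch.

```python
# Illustrative only: evaluating a rank-K factorization X ≈ W Z^T with RMSE
# computed over the observed ratings. Shapes, noise level, and the observation
# mask are assumptions made up for this sketch.
import numpy as np

rng = np.random.default_rng(1)
D, N, K = 20, 15, 4                           # items, users, latent dimension
W = rng.normal(size=(D, K))                   # item factors
Z = rng.normal(size=(N, K))                   # user factors
X = W @ Z.T + 0.1 * rng.normal(size=(D, N))   # synthetic "observed" ratings
mask = rng.random((D, N)) < 0.3               # which entries were actually rated

pred = W @ Z.T
rmse = np.sqrt(np.mean((X[mask] - pred[mask]) ** 2))
print(f"RMSE over observed entries: {rmse:.4f}")
```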
epfl-collab
You are doing your ML project. It is a regression task under a square loss. Your neighbor uses linear regression and least squares. You are smarter. You are using a neural net with 10 layers and activation functions $f(x)=3 x$. You have a powerful laptop but not a supercomputer. You bet your neighbor a beer at Satellite on who will get a substantially better score. However, in the end it is essentially a tie, so you decide to have two beers and both pay. What is the reason for the outcome of this bet?
['Because I should have used only one layer.', 'Because I should have used more layers.', 'Because it is almost impossible to train a network with 10 layers without a supercomputer.', 'Because we use exactly the same scheme.']
D
null
Document 1::: Learning rule Depending on the complexity of actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations. The learning rule is one of the factors which decides how fast or how accurately the artificial network can be developed. Depending upon the process to develop the network there are three main models of machine learning: Unsupervised learning Supervised learning Reinforcement learning Document 2::: Learning rule Depending on the complexity of actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations. The learning rule is one of the factors which decides how fast or how accurately the artificial network can be developed. Depending upon the process to develop the network there are three main models of machine learning: Unsupervised learning Supervised learning Reinforcement learning Document 3::: Odds algorithm In decision theory, the odds algorithm (or Bruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows from the odds strategy, and the importance of the odds strategy lies in its optimality, as explained below. The odds algorithm applies to a class of problems called last-success problems. Formally, the objective in these problems is to maximize the probability of identifying in a sequence of sequentially observed independent events the last event satisfying a specific criterion (a "specific event"). Document 4::: Machine learning Machine learning (ML) is an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive, and instead the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. Recently, generative artificial neural networks have been able to surpass results of many previous approaches. Machine-learning approaches have been applied to large language models, computer vision, speech recognition, email filtering, agriculture and medicine, where it is too costly to develop algorithms to perform the needed tasks.The mathematical foundations of ML are provided by mathematical optimization (mathematical programming) methods. Data mining is a related (parallel) field of study, focusing on exploratory data analysis through unsupervised learning.ML is known in its application across business problems under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods. Document 5::: Logic learning machine Logic learning machine (LLM) is a machine learning method based on the generation of intelligible rules. LLM is an efficient implementation of the Switching Neural Network (SNN) paradigm, developed by Marco Muselli, Senior Researcher at the Italian National Research Council CNR-IEIIT in Genoa. LLM has been employed in many different sectors, including the field of medicine (orthopedic patient classification, DNA micro-array analysis and Clinical Decision Support Systems ), financial services and supply chain management.
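The intended reasoning behind answer (d) is that the activation f(x) = 3x is linear, so any stack of such layers collapses to a single linear map and cannot out-express least-squares linear regression. A small numeric check, with randomly chosen placeholder weights:

```python
# Why the bet is a tie: with f(x) = 3x every layer is linear, so the 10-layer
# network is just one linear map. The weights below are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
d = 5
weights = [rng.normal(size=(d, d)) for _ in range(10)]

def deep_linear(x):
    for W in weights:
        x = 3 * (W @ x)          # layer followed by the activation f(x) = 3x
    return x

W_eff = np.eye(d)                # collapse the stack into a single matrix
for W in weights:
    W_eff = 3 * W @ W_eff

x = rng.normal(size=d)
print(np.allclose(deep_linear(x), W_eff @ x))   # True: same hypothesis class as linear regression
```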
epfl-collab
Which of the following is correct regarding Louvain algorithm?
['Modularity is always maximal for the communities found at the top level of the community hierarchy', 'Clique is the only topology of nodes where the algorithm detects the same communities, independently of the starting point', 'If n cliques of the same order are connected cyclically with n-1 edges, then the algorithm will always detect the same communities, independently of the starting point', 'It creates a hierarchy of communities with a common root']
C
null
Document 1::: Suurballe's algorithm In theoretical computer science and network routing, Suurballe's algorithm is an algorithm for finding two disjoint paths in a nonnegatively-weighted directed graph, so that both paths connect the same pair of vertices and have minimum total length. The algorithm was conceived by John W. Suurballe and published in 1974. The main idea of Suurballe's algorithm is to use Dijkstra's algorithm to find one path, to modify the weights of the graph edges, and then to run Dijkstra's algorithm a second time. Document 2::: Pohlig–Hellman algorithm In group theory, the Pohlig–Hellman algorithm, sometimes credited as the Silver–Pohlig–Hellman algorithm, is a special-purpose algorithm for computing discrete logarithms in a finite abelian group whose order is a smooth integer. The algorithm was introduced by Roland Silver, but first published by Stephen Pohlig and Martin Hellman (independent of Silver). Document 3::: Dijkstra–Scholten algorithm The Dijkstra–Scholten algorithm (named after Edsger W. Dijkstra and Carel S. Scholten) is an algorithm for detecting termination in a distributed system. The algorithm was proposed by Dijkstra and Scholten in 1980. First, consider the case of a simple process graph which is a tree. A distributed computation which is tree-structured is not uncommon. Document 4::: Freivalds' algorithm Freivalds' algorithm (named after Rūsiņš Mārtiņš Freivalds) is a probabilistic randomized algorithm used to verify matrix multiplication. Given three $n\times n$ matrices $A$, $B$, and $C$, a general problem is to verify whether $A\times B=C$. A naïve algorithm would compute the product $A\times B$ explicitly and compare term by term whether this product equals $C$. However, the best known matrix multiplication algorithm runs in $O(n^{2.3729})$ time. Freivalds' algorithm utilizes randomization in order to reduce this time bound to $O(n^{2})$ with high probability. In $O(kn^{2})$ time the algorithm can verify a matrix product with probability of failure less than $2^{-k}$. Document 5::: Floyd algorithm In computer science, the Floyd–Warshall algorithm (also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but with no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation $R$, or (in connection with the Schulze voting system) widest paths between all pairs of vertices in a weighted graph.
epfl-collab
Let the first four retrieved documents be N N R R, where N denotes a non-relevant and R a relevant document. Then the MAP (Mean Average Precision) is:
['3/4', '5/12', '7/24', '1/2']
B
null
Document 1::: Precision and recall In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: $\frac{relevant\_retrieved\_instances}{all\_\mathbf{retrieved}\_instances}$. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Document 2::: Precision and recall Written as a formula: $\frac{relevant\_retrieved\_instances}{all\_\mathbf{relevant}\_instances}$. Both precision and recall are therefore based on relevance. Document 3::: Average precision Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query. Document 4::: Mean absolute percentage error The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of prediction accuracy of a forecasting method in statistics. It usually expresses the accuracy as a ratio defined by the formula: $\mathrm{MAPE}=\frac{1}{n}\sum_{t=1}^{n}\left|\frac{A_{t}-F_{t}}{A_{t}}\right|$ where $A_{t}$ is the actual value and $F_{t}$ is the forecast value. Their difference is divided by the actual value $A_{t}$. The absolute value of this ratio is summed for every forecasted point in time and divided by the number of fitted points n. Document 5::: Average precision Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection.
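For the ranking N N R R, average precision is the mean of the precision values at the two relevant positions, (1/3 + 2/4)/2 = 5/12, matching the marked answer. A tiny sketch of that computation, assuming the two retrieved relevant documents are the only relevant ones:

```python
# Average precision for the ranking N N R R, assuming the two retrieved
# relevant documents are the only relevant ones in the collection.
def average_precision(relevances):
    hits, precisions = 0, []
    for i, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)      # precision at each relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0

print(average_precision([0, 0, 1, 1]))       # (1/3 + 2/4) / 2 = 5/12 ≈ 0.4167
```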
epfl-collab
Which of the following is true?
['High recall implies low precision', 'High recall hurts precision', 'High precision implies low recall', 'High precision hurts recall']
D
null
Document 1::: P value 57, No 3, 171–182 (with discussion). For a concise modern statement see Chapter 10 of "All of Statistics: A Concise Course in Statistical Inference," Springer; 1st Corrected ed. 20 edition (September 17, 2004). Larry Wasserman. Document 2::: Principle of contradiction In logic, the law of non-contradiction (LNC) (also known as the law of contradiction, principle of non-contradiction (PNC), or the principle of contradiction) states that contradictory propositions cannot both be true in the same sense at the same time, e. g. the two propositions "p is the case" and "p is not the case" are mutually exclusive. Formally, this is expressed as the tautology ¬(p ∧ ¬p). The law is not to be confused with the law of excluded middle which states that at least one, "p is the case" or "p is not the case" holds. One reason to have this law is the principle of explosion, which states that anything follows from a contradiction. Document 3::: Markov property (group theory) In the mathematical subject of group theory, the Adian–Rabin theorem is a result that states that most "reasonable" properties of finitely presentable groups are algorithmically undecidable. The theorem is due to Sergei Adian (1955) and, independently, Michael O. Rabin (1958). Document 4::: Hinge theorem In geometry, the hinge theorem (sometimes called the open mouth theorem) states that if two sides of one triangle are congruent to two sides of another triangle, and the included angle of the first is larger than the included angle of the second, then the third side of the first triangle is longer than the third side of the second triangle. This theorem is given as Proposition 24 in Book I of Euclid's Elements. Document 5::: Remarks on the Foundations of Mathematics Thus it can only be true, but unprovable." Just as we can ask, " 'Provable' in what system?," so we must also ask, "'True' in what system?" "True in Russell's system" means, as was said, proved in Russell's system, and "false" in Russell's system means the opposite has been proved in Russell's system.—Now, what does your "suppose it is false" mean?
epfl-collab
The inverse document frequency of a term can increase
['by adding a document to the document collection that contains the term', 'by adding a document to the document collection that does not contain the term', 'by adding the term to a document that contains the term', 'by removing a document from the document collection that does not contain the term']
B
null
Document 1::: Inverted index In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content). The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines. Document 2::: Inverted index Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204. There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created. Document 3::: ETBLAST eTBLAST received thousands of random samples of Medline abstracts for a large-scale study. Those with the highest similarity were assessed then entered into an on-line database. The work revealed several trends including an increasing rate of duplication in the biomedical literature, according to prominent scientific journals Bioinformatics, Anaesthesia and Intensive Care, Clinical Chemistry, Urologic oncology, Nature, and Science. Document 4::: Uncertain inference Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus $d\to q$ is uncertain. This will affect the plausibility of a given query. Document 5::: Uncertain inference Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus $d\to q$ is uncertain. This will affect the plausibility of a given query.
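The reasoning behind answer (b) can be checked numerically with the common definition idf(t) = log(N / df(t)); the exact base and smoothing used in the course may differ, so treat this only as an illustration:

```python
# Illustration with idf(t) = log(N / df(t)); the exact variant used in the
# course (base, smoothing) may differ.
import math

def idf(n_docs, doc_freq):
    return math.log(n_docs / doc_freq)

N, df = 100, 10
print(idf(N, df))          # baseline
print(idf(N + 1, df))      # add a doc WITHOUT the term: N grows, df fixed -> idf increases
print(idf(N + 1, df + 1))  # add a doc WITH the term: idf decreases here
```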
epfl-collab
Which of the following is wrong regarding Ontologies?
['Ontologies support domain-specific vocabularies', 'Ontologies help in the integration of data expressed in different models', 'We can create more than one ontology that conceptualize the same real-world entities', 'Ontologies dictate how semi-structured data are serialized']
D
null
Document 1::: Class (knowledge representation) The first definition of class results in ontologies in which a class is a subclass of collection. The second definition of class results in ontologies in which collections and classes are more fundamentally different. Classes may classify individuals, other classes, or a combination of both. Document 2::: Class (knowledge representation) While extensional classes are more well-behaved and well understood mathematically, as well as less problematic philosophically, they do not permit the fine grained distinctions that ontologies often need to make. For example, an ontology may want to distinguish between the class of all creatures with a kidney and the class of all creatures with a heart, even if these classes happen to have exactly the same members. In most upper ontologies, the classes are defined intensionally. Intensionally defined classes usually have necessary conditions associated with membership in each class. Some classes may also have sufficient conditions, and in those cases the combination of necessary and sufficient conditions make that class a fully defined class. Document 3::: Plant ontology Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education. Document 4::: Class (knowledge representation) The classes of an ontology may be extensional or intensional in nature. A class is extensional if and only if it is characterized solely by its membership. More precisely, a class C is extensional if and only if for any class C', if C' has exactly the same members as C, then C and C' are identical. If a class does not satisfy this condition, then it is intensional. Document 5::: Disease Ontology The Disease Ontology (DO) is a formal ontology of human disease. The Disease Ontology project is hosted at the Institute for Genome Sciences at the University of Maryland School of Medicine. The Disease Ontology project was initially developed in 2003 at Northwestern University to address the need for a purpose-built ontology that covers the full spectrum of disease concepts annotated within biomedical repositories within an ontological framework that is extensible to meet community needs.
epfl-collab
In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)?
['R@k-1 < R@k+1', 'P@k-1 = P@k+1', 'R@k-1 = R@k+1', 'P@k-1 > P@k+1']
A
null
Document 1::: Precision and recall In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: $\frac{relevant\_retrieved\_instances}{all\_\mathbf{retrieved}\_instances}$. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Document 2::: Precision and recall Written as a formula: $\frac{relevant\_retrieved\_instances}{all\_\mathbf{relevant}\_instances}$. Both precision and recall are therefore based on relevance. Document 3::: Average precision Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query. Document 4::: Precision and recall More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors, for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned). Document 5::: Uncertain inference Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus $d\to q$ is uncertain. This will affect the plausibility of a given query.
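A quick numeric illustration of the claim: when the document at position k+1 is relevant, recall must strictly increase from R@k-1 to R@k+1, while precision can move either way. The toy relevance vector and the total number of relevant documents below are assumptions:

```python
# Toy check: with a relevant document entering at position k+1, recall strictly
# increases from k-1 to k+1; precision can go either way. The relevance vector
# and the total number of relevant documents are assumptions.
def precision_recall_at(relevances, k, total_relevant):
    retrieved = relevances[:k]
    return sum(retrieved) / k, sum(retrieved) / total_relevant

rels = [1, 0, 1, 0, 1]      # position k = 4 is non-relevant, k + 1 = 5 is relevant
total_relevant = 3
for j in (3, 5):            # k - 1 and k + 1
    p, r = precision_recall_at(rels, j, total_relevant)
    print(f"P@{j} = {p:.3f}, R@{j} = {r:.3f}")
```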
epfl-collab
What is true regarding Fagin's algorithm?
['It never reads more than (kn)½ entries from a posting list', 'It provably returns the k documents with the largest aggregate scores', 'Posting files need to be indexed by TF-IDF weights', 'It performs a complete scan over the posting files']
B
null
Document 1::: Fagin's theorem Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines. Document 2::: Ford-Fulkerson algorithm The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is sometimes called a "method" instead of an "algorithm" as the approach to finding augmenting paths in a residual graph is not fully specified or it is specified in several implementations with different running times. It was published in 1956 by L. R. Ford Jr. and D. R. Fulkerson. Document 3::: Ford-Fulkerson algorithm The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path. Document 4::: Fibonacci search technique In computer science, the Fibonacci search technique is a method of searching a sorted array using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers. Compared to binary search where the sorted array is divided into two equal-sized parts, one of which is examined further, Fibonacci search divides the array into two parts that have sizes that are consecutive Fibonacci numbers. On average, this leads to about 4% more comparisons to be executed, but it has the advantage that one only needs addition and subtraction to calculate the indices of the accessed array elements, while classical binary search needs bit-shift (see Bitwise operation), division or multiplication, operations that were less common at the time Fibonacci search was first published. Fibonacci search has an average- and worst-case complexity of O(log n) (see Big O notation). Document 5::: Faugère's F4 and F5 algorithms This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences.
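As a rough sketch of the property named in the answer (Fagin's algorithm provably returns the top-k documents for a monotone aggregation function), the code below implements the classical two-phase idea: sorted access in parallel until k documents have been seen in every posting list, then random access to complete the remaining scores. The toy posting lists, the sum aggregation, and k are assumptions, and this is not meant as the course's reference implementation.

```python
# Compact sketch of Fagin's algorithm (FA) for a monotone aggregation (sum).
# Toy posting lists and k are assumptions; not a reference implementation.
def fagin_top_k(lists, k):
    """lists: one dict per attribute, mapping doc -> score."""
    sorted_lists = [sorted(l.items(), key=lambda kv: -kv[1]) for l in lists]
    seen_in = {}            # doc -> set of list indices where it appeared
    depth = 0
    # Phase 1: sorted access in parallel until k docs were seen in ALL lists.
    while sum(1 for s in seen_in.values() if len(s) == len(lists)) < k:
        if depth >= max(len(sl) for sl in sorted_lists):
            break           # ran out of postings
        for i, sl in enumerate(sorted_lists):
            if depth < len(sl):
                doc, _ = sl[depth]
                seen_in.setdefault(doc, set()).add(i)
        depth += 1
    # Phase 2: random access to complete the score of every seen doc.
    scores = {doc: sum(l.get(doc, 0.0) for l in lists) for doc in seen_in}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

lists = [{"d1": 0.9, "d2": 0.8, "d3": 0.1},
         {"d2": 0.7, "d3": 0.6, "d1": 0.2}]
print(fagin_top_k(lists, k=2))    # [('d2', 1.5), ('d1', 1.1)]
```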
epfl-collab
Which of the following is WRONG for Ontologies?
['Different information systems need to agree on the same ontology in order to interoperate.', 'They help in the integration of data expressed in different models.', 'They give the possibility to specify schemas for different domains.', 'They dictate how semi-structured data are serialized.']
D
null
Document 1::: Class (knowledge representation) While extensional classes are more well-behaved and well understood mathematically, as well as less problematic philosophically, they do not permit the fine grained distinctions that ontologies often need to make. For example, an ontology may want to distinguish between the class of all creatures with a kidney and the class of all creatures with a heart, even if these classes happen to have exactly the same members. In most upper ontologies, the classes are defined intensionally. Intensionally defined classes usually have necessary conditions associated with membership in each class. Some classes may also have sufficient conditions, and in those cases the combination of necessary and sufficient conditions make that class a fully defined class. Document 2::: Class (knowledge representation) The first definition of class results in ontologies in which a class is a subclass of collection. The second definition of class results in ontologies in which collections and classes are more fundamentally different. Classes may classify individuals, other classes, or a combination of both. Document 3::: Class (knowledge representation) The classes of an ontology may be extensional or intensional in nature. A class is extensional if and only if it is characterized solely by its membership. More precisely, a class C is extensional if and only if for any class C', if C' has exactly the same members as C, then C and C' are identical. If a class does not satisfy this condition, then it is intensional. Document 4::: Disease Ontology The Disease Ontology (DO) is a formal ontology of human disease. The Disease Ontology project is hosted at the Institute for Genome Sciences at the University of Maryland School of Medicine. The Disease Ontology project was initially developed in 2003 at Northwestern University to address the need for a purpose-built ontology that covers the full spectrum of disease concepts annotated within biomedical repositories within an ontological framework that is extensible to meet community needs. Document 5::: Plant ontology Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education.
epfl-collab
What is the benefit of LDA over LSI?
['LSI is based on a model of how documents are generated, whereas LDA is not', 'LDA has better theoretical explanation, and its empirical results are in general better than LSI’s', 'LSI is sensitive to the ordering of the words in a document, whereas LDA is not', 'LDA represents semantic dimensions (topics, concepts) as weighted combinations of terms, whereas LSI does not']
B
null
Document 1::: Discriminant function analysis Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification. LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label). Document 2::: Dry low emission A DLE combustor takes up more space than a SAC turbine and if the turbine is changed it can not be connected directly to existing equipment without considerable changes in the positioning of the equipment. The SAC turbine has one single concentric ring where the DLE turbine has two or three rings with premixers depending on gas turbine type. DLE technology demands an advanced control system with a large number of burners. DLE results in lower NOx emissions because the process is run with less fuel and air, the temperature is lower and combustion takes place at a lower temperature. Document 3::: Digital Differential Analyzer A digital differential analyzer (DDA), also sometimes called a digital integrating computer, is a digital implementation of a differential analyzer. The integrators in a DDA are implemented as accumulators, with the numeric result converted back to a pulse rate by the overflow of the accumulator. The primary advantages of a DDA over the conventional analog differential analyzer are greater precision of the results and the lack of drift/noise/slip/lash in the calculations. The precision is only limited by register size and the resulting accumulated rounding/truncation errors of repeated addition. Document 4::: Link Capacity Adjustment Scheme Link Capacity Adjustment Scheme or LCAS is a method to dynamically increase or decrease the bandwidth of virtual concatenated containers. The LCAS protocol is specified in ITU-T G.7042. It allows on-demand increase or decrease of the bandwidth of the virtual concatenated group in a hitless manner. This brings bandwidth-on-demand capability for data clients like Ethernet when mapped into TDM containers. Document 5::: Life cycle cost analysis The term differs slightly from Total cost of ownership analysis (TCOA). LCCA determines the most cost-effective option to purchase, run, sustain or dispose of an object or process, and TCOA is used by managers or buyers to analyze and determine the direct and indirect cost of an item.The term is used in the study of Industrial ecology (IE). The purpose of IE is to help managers make informed decisions by tracking and analyzing products, resources and wastes.
epfl-collab
Maintaining the order of document identifiers for vocabulary construction when partitioning the document collection is important
['in both', 'in neither of the two', 'in the index merging approach for single node machines', 'in the map-reduce approach for parallel clusters']
C
null
Document 1::: Text categorization Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. Document 2::: Thesaurus (information retrieval) In the context of information retrieval, a thesaurus (plural: "thesauri") is a form of controlled vocabulary that seeks to dictate semantic manifestations of metadata in the indexing of content objects. A thesaurus serves to minimise semantic ambiguity by ensuring uniformity and consistency in the storage and retrieval of the manifestations of content objects. ANSI/NISO Z39.19-2005 defines a content object as "any item that is to be described for inclusion in an information retrieval system, website, or other source of information". The thesaurus aids the assignment of preferred terms to convey semantic metadata associated with the content object.A thesaurus serves to guide both an indexer and a searcher in selecting the same preferred term or combination of preferred terms to represent a given subject. ISO 25964, the international standard for information retrieval thesauri, defines a thesaurus as a “controlled and structured vocabulary in which concepts are represented by terms, organized so that relationships between concepts are made explicit, and preferred terms are accompanied by lead-in entries for synonyms or quasi-synonyms.” A thesaurus is composed by at least three elements: 1-a list of words (or terms), 2-the relationship amongst the words (or terms), indicated by their hierarchical relative position (e.g. parent/broader term; child/narrower term, synonym, etc.), 3-a set of rules on how to use the thesaurus. Document 3::: Taxonomic treatment In today’s publishing, a taxonomic treatment tagis used to delimit such a section. It allows to make this section findable, accessible, interoperable and reusable FAIR data. This is implemented in the Biodiversity Literature Repository, where upon deposition of the treatment a persistent DataCite digital object identifier (DOI) is minted. Document 4::: Information Coding Classification The terms of the first three hierarchical levels were set out in German and English in Wissensorganisation. Entwicklung, Aufgabe, Anwendung, Zukunft, on pp. 82 to 100. Document 5::: Information Coding Classification It was published in 2014 and available so far only in German. In the meantime, also the French terms of the knowledge fields have been collected. Competence for maintenance and further development rests with the German Chapter of the International Society for Knowledge Organization (ISKO) e.V.
epfl-collab
Which of the following is correct regarding Crowdsourcing?
['It is applicable only for binary classification problems', 'The output of Majority Decision can be equal to the one of Expectation-Maximization', 'Random Spammers give always the same answer for every question', 'Honey Pot discovers all the types of spammers but not the sloppy workers']
B
null
Document 1::: Crowd sourcing Daren C. Brabham defined crowdsourcing as an "online, distributed problem-solving and production model." Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms are affected not only by their quality, but also by the communication among users about the ideas, and presentation in the platform itself.Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public, and an open call for contributions to help solve the problem. Document 2::: Crowd sourcing The term crowdsourcing was coined in 2006 by two editors at Wired, Jeff Howe and Mark Robinson, to describe how businesses were using the Internet to "outsource work to the crowd", which quickly led to the portmanteau "crowdsourcing". Howe published a definition for the term in a blog post in June 2006: Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers. Document 3::: Crowd sourcing Members of the public submit solutions that are then owned by the entity who originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or public recognition. In other cases, the only rewards may be praise or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, from experts, or from small businesses. Document 4::: Crowdsourcing as human-machine translation The use of crowdsourcing and text corpus in human-machine translation (HMT) within the last few years have become predominant in their area, in comparison to solely using machine translation (MT). There have been a few recent academic journals looking into the benefits that using crowdsourcing as a translation technique could bring to the current approach to the task and how it could help improve and make more efficient the current tools available to the public. Document 5::: Crowd Supply Crowd Supply is a crowdfunding platform based in Portland, Oregon. The platform has claimed "over twice the success rate of Kickstarter and Indiegogo", and partners with creators who use it, providing mentorship resembling a business incubator.Some see Crowd Supply's close management of projects as the solution to the fulfillment failure rate of other crowdfunding platforms. The site also serves as an online store for the inventories of successful campaigns.Notable projects from the platform include Andrew Huang's Novena, an open-source laptop.
epfl-collab
When computing PageRank iteratively, the computation ends when...
['The norm of the difference of rank vectors of two subsequent iterations falls below a predefined threshold', 'All nodes of the graph have been visited at least once', 'The difference among the eigenvalues of two subsequent iterations falls below a predefined threshold', 'The probability of visiting an unseen node falls below a predefined threshold']
A
null
Document 1::: PageRank PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. Document 2::: PageRank The underlying assumption is that more important websites are likely to receive more links from other websites. Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, PageRank and all associated patents have expired. Document 3::: Hilltop algorithm These are pages that are about a specific topic and have links to many non-affiliated pages on that topic. The original algorithm relied on independent directories with categorized links to sites. Results are ranked based on the match between the query and relevant descriptive text for hyperlinks on expert pages pointing to a given result page. Document 4::: Ranking By reducing detailed measures to a sequence of ordinal numbers, rankings make it possible to evaluate complex information according to certain criteria. Thus, for example, an Internet search engine may rank the pages it finds according to an estimation of their relevance, making it possible for the user quickly to select the pages they are likely to want to see. Analysis of data obtained by ranking commonly requires non-parametric statistics. Document 5::: Power method In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix $A$, the algorithm will produce a number $\lambda$, which is the greatest (in absolute value) eigenvalue of $A$, and a nonzero vector $v$, which is a corresponding eigenvector of $\lambda$, that is, $Av=\lambda v$. The algorithm is also known as the Von Mises iteration. Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of matrix $A$ by a vector, so it is effective for a very large sparse matrix with appropriate implementation.
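A hedged sketch of the stopping rule described by the correct option: iterate the rank update until the norm of the difference between two successive rank vectors falls below a threshold. The link matrix, damping factor, and threshold below are illustrative choices only.

```python
# Iterative PageRank with the stopping rule from the correct option: stop when
# the norm of the difference between successive rank vectors is below a
# threshold. Link matrix, damping factor, and threshold are illustrative.
import numpy as np

def pagerank(M, d=0.85, eps=1e-8, max_iter=1000):
    """M: column-stochastic link matrix (column j = out-link probabilities of node j)."""
    n = M.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = d * M @ r + (1 - d) / n
        if np.linalg.norm(r_next - r, 1) < eps:    # convergence criterion
            return r_next
        r = r_next
    return r

M = np.array([[0.0, 0.0, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 1.0, 0.0]])
print(pagerank(M))
```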
epfl-collab
How does LSI querying work?
['The query vector is treated as an additional term; then cosine similarity is computed', 'The query vector is multiplied with an orthonormal matrix; then cosine similarity is computed', 'The query vector is transformed by Matrix S; then cosine similarity is computed', 'The query vector is treated as an additional document; then cosine similarity is computed']
D
null
Document 1::: Data retrieval The retrieved data may be stored in a file, printed, or viewed on the screen. A query language, like for example Structured Query Language (SQL), is used to prepare the queries. SQL is an American National Standards Institute (ANSI) standardized query language developed specifically to write database queries. Each database management system may have its own language, but most are relational. Document 2::: Query optimization Query optimization is a feature of many relational database management systems and other databases such as NoSQL and graph databases. The query optimizer attempts to determine the most efficient way to execute a given query by considering the possible query plans. Generally, the query optimizer cannot be accessed directly by users: once queries are submitted to the database server, and parsed by the parser, they are then passed to the query optimizer where optimization occurs. However, some database engines allow guiding the query optimizer with hints. A query is a request for information from a database. Document 3::: Query optimization It can be as simple as "find the address of a person with Social Security number 123-45-6789," or more complex like "find the average salary of all the employed married men in California between the ages 30 to 39 who earn less than their spouses." The result of a query is generated by processing the rows in a database in a way that yields the requested information. Since database structures are complex, in most cases, and especially for not-very-simple queries, the needed data for a query can be collected from a database by accessing it in different ways, through different data-structures, and in different orders. Document 4::: Query understanding Query understanding is the process of inferring the intent of a search engine user by extracting semantic meaning from the searcher’s keywords. Query understanding methods generally take place before the search engine retrieves and ranks results. It is related to natural language processing but specifically focused on the understanding of search queries. Query understanding is at the heart of technologies like Amazon Alexa, Apple's Siri, Google Assistant, IBM's Watson, and Microsoft's Cortana. Document 5::: Query (complexity) In descriptive complexity, a query is a mapping from structures of one signature to structures of another vocabulary. Neil Immerman, in his book Descriptive Complexity, "use the concept of query as the fundamental paradigm of computation" (p. 17). Given signatures $\sigma$ and $\tau$, we define the set of structures on each language, $\mathrm{STRUC}[\sigma]$ and $\mathrm{STRUC}[\tau]$. A query is then any mapping $I:\mathrm{STRUC}[\sigma]\to \mathrm{STRUC}[\tau]$. Computational complexity theory can then be phrased in terms of the power of the mathematical logic necessary to express a given query.
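One common way to realize "treat the query as an additional document" is the LSI folding-in step q_k = S_k^{-1} U_k^T q, followed by cosine similarity against the documents in the latent space. The toy term-document matrix and the choice k = 2 are assumptions made for illustration:

```python
# LSI query folding-in: the query is handled like an extra document, mapped to
# the latent space as q_k = S_k^{-1} U_k^T q, then compared by cosine similarity.
# The toy term-document matrix and k = 2 are assumptions.
import numpy as np

X = np.array([[1, 0, 1, 0],     # term-document matrix (terms x documents)
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
U_k, S_k_inv = U[:, :k], np.diag(1.0 / s[:k])
docs_k = Vt[:k, :].T            # each row: one document in the latent space

q = np.array([1.0, 0.0, 1.0, 0.0])     # query as a term vector
q_k = S_k_inv @ U_k.T @ q              # fold the query in like a document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print([round(cosine(q_k, d), 3) for d in docs_k])   # similarity to each document
```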
epfl-collab
Suppose that an item in a leaf node N exists in every path. Which one is correct?
['For every node P that is a parent of N in the fp tree, confidence(P->N) = 1', 'N’s minimum possible support is equal to the number of paths.', 'N co-occurs with its prefix in every transaction.', 'The item N exists in every candidate set.']
B
null
Document 1::: Tree (automata theory) If every node of a tree has finitely many successors, then it is called a finitely, otherwise an infinitely branching tree. A path π is a subset of T such that ε ∈ π and for every t ∈ T, either t is a leaf or there exists a unique $c\in \mathbb{N}$ such that t.c ∈ π. A path may be a finite or infinite set. If all paths of a tree are finite then the tree is called finite, otherwise infinite. Document 2::: Arborescence (graph theory) In graph theory, an arborescence is a directed graph in which, for a vertex u (called the root) and any other vertex v, there is exactly one directed path from u to v. An arborescence is thus the directed-graph form of a rooted tree, understood here as an undirected graph. Equivalently, an arborescence is a directed, rooted tree in which all edges point away from the root; a number of other equivalent characterizations exist. Every arborescence is a directed acyclic graph (DAG), but not every DAG is an arborescence. An arborescence can equivalently be defined as a rooted digraph in which the path from the root to any other vertex is unique. Document 3::: Shortest-path tree In mathematics and computer science, a shortest-path tree rooted at a vertex v of a connected, undirected graph G is a spanning tree T of G, such that the path distance from root v to any other vertex u in T is the shortest path distance from v to u in G. In connected graphs where shortest paths are well-defined (i.e. where there are no negative-length cycles), we may construct a shortest-path tree using the following algorithm: Compute dist(u), the shortest-path distance from root v to vertex u in G using Dijkstra's algorithm or Bellman–Ford algorithm. For all non-root vertices u, we can assign to u a parent vertex pu such that pu is connected to u, and that dist(pu) + edge_dist(pu,u) = dist(u). In case multiple choices for pu exist, choose pu for which there exists a shortest path from v to pu with as few edges as possible; this tie-breaking rule is needed to prevent loops when there exist zero-length cycles. Document 4::: Hamiltonian graph In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in an undirected or directed graph that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a cycle that visits each vertex exactly once. A Hamiltonian path that starts and ends at adjacent vertices can be completed by adding one more edge to form a Hamiltonian cycle, and removing any edge from a Hamiltonian cycle produces a Hamiltonian path. Determining whether such paths and cycles exist in graphs (the Hamiltonian path problem and Hamiltonian cycle problem) are NP-complete. Document 5::: Dynamic trees By doing this operation on two distinct nodes, one can check whether they belong to the same tree. The represented forest may consist of very deep trees, so if we represent the forest as a plain collection of parent pointer trees, it might take us a long time to find the root of a given node. However, if we represent each tree in the forest as a link/cut tree, we can find which tree an element belongs to in O(log(n)) amortized time. Moreover, we can quickly adjust the collection of link/cut trees to changes in the represented forest.
epfl-collab
In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)?
['P@k-1 = P@k+1', 'R@k-1 < R@k+1', 'R@k-1 = R@k+1', 'P@k-1 > P@k+1']
B
null
Document 1::: Precision and recall In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: $\frac{relevant\_retrieved\_instances}{all\_\mathbf{retrieved}\_instances}$. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Document 2::: Precision and recall Written as a formula: $\frac{relevant\_retrieved\_instances}{all\_\mathbf{relevant}\_instances}$. Both precision and recall are therefore based on relevance. Document 3::: Average precision Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query. Document 4::: Precision and recall More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors, for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned). Document 5::: Uncertain inference Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus $d\to q$ is uncertain. This will affect the plausibility of a given query.
epfl-collab
Regarding the number of times the Apriori algorithm and the FP-growth algorithm for association rule mining scan the transaction database, which of the following is true?
['apriori cannot have fewer scans than fpgrowth', 'fpgrowth and apriori can have the same number of scans', 'all three above statements are false', 'fpgrowth has always strictly fewer scans than apriori']
B
null
Document 1::: Apriori algorithm Apriori is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine association rules which highlight general trends in the database: this has applications in domains such as market basket analysis. Document 2::: Affinity analysis Also, a priori algorithm is used to reduce the search space for the problem.The support metric in the association rule learning algorithm is defined as the frequency of the antecedent or consequent appearing together in a data set. Moreover, confidence is expressed as the reliability of the association rules determined by the ratio of the data records containing both A and B. The minimum threshold for support and confidence are inputs to the model. Considering all the above-mentioned definitions, affinity analysis can develop rules that will predict the occurrence of an event based on the occurrence of other events. Document 3::: Affinity analysis Also, a priori algorithm is used to reduce the search space for the problem.The support metric in the association rule learning algorithm is defined as the frequency of the antecedent or consequent appearing together in a data set. Moreover, confidence is expressed as the reliability of the association rules determined by the ratio of the data records containing both A and B. The minimum threshold for support and confidence are inputs to the model. Considering all the above-mentioned definitions, affinity analysis can develop rules that will predict the occurrence of an event based on the occurrence of other events. Document 4::: Affinity analysis The first condition or feature (A) is called antecedent and the latter (B) is known as consequent. This process is repeated until no additional frequent itemsets are found. There are two important metrics for performing the association rules mining technique: support and confidence. Document 5::: Affinity analysis The first condition or feature (A) is called antecedent and the latter (B) is known as consequent. This process is repeated until no additional frequent itemsets are found. There are two important metrics for performing the association rules mining technique: support and confidence.
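A minimal Apriori-style sketch in Python (illustrative only; candidate pruning via the Apriori property is omitted) that makes the scan count explicit: one full pass over the transaction database per candidate-itemset size, which is the quantity the question compares with FP-growth's two scans.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets, counting one full database scan per level."""
    scans = 0
    candidates = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    k = 1
    while candidates:
        scans += 1                              # one pass over all transactions
        counts = {c: 0 for c in candidates}
        for t in transactions:
            tset = set(t)
            for c in candidates:
                if c <= tset:
                    counts[c] += 1
        level = {c: n for c, n in counts.items()
                 if n / len(transactions) >= min_support}
        frequent.update(level)
        # Generate (k+1)-candidates by joining frequent k-itemsets.
        keys = list(level)
        candidates = {a | b for a, b in combinations(keys, 2)
                      if len(a | b) == k + 1}
        k += 1
    return frequent, scans

tx = [("a", "b", "c"), ("a", "c"), ("a", "d"), ("b", "c")]
freq, scans = apriori(tx, min_support=0.5)
print(freq, "scans:", scans)
```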
epfl-collab
Given the following teleporting matrix (Ε) for nodes A, B and C: [0 ½ 0] [0 0 0] [0 ½ 1], and making no assumptions about the link matrix (R), which of the following is correct: (Reminder: columns are the probabilities to leave the respective node.)
['A random walker can never leave node A', 'A random walker can always leave node B', 'A random walker can never reach node A', 'A random walker can always leave node C']
B
null
Document 1::: Transition rate matrix In probability theory, a transition-rate matrix (also known as a Q-matrix, intensity matrix, or infinitesimal generator matrix) is an array of numbers describing the instantaneous rate at which a continuous-time Markov chain transitions between states. In a transition-rate matrix Q (sometimes written A), element qij (for i ≠ j) denotes the rate departing from i and arriving in state j. Diagonal elements qii are defined such that {\displaystyle q_{ii}=-\sum _{j\neq i}q_{ij}}, and therefore the rows of the matrix sum to zero. (See condition 3 in the definition section.) Document 2::: Network probability matrix The network probability matrix describes the probability structure of a network based on the historical presence or absence of edges in a network. For example, individuals in a social network are not connected to other individuals with uniform random probability. The probability structure is much more complex. Intuitively, there are some people whom a person will communicate with or be connected more closely than others. Document 3::: Random walk closeness centrality {\displaystyle H(.,j)=(I-M_{-j})^{-1}e}, where {\displaystyle H(.,j)} is the vector for first passage times for a walk ending at node j, and e is an n-1 dimensional vector of ones. Mean first passage time is not symmetric, even for undirected graphs. Document 4::: Graph Laplacian For example, let {\textstyle e_{i}} denote the i-th standard basis vector. Then {\textstyle x=e_{i}P} is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex {\textstyle i}; i.e., {\textstyle x_{j}=\mathbb {P} \left(v_{i}\to v_{j}\right)}. Document 5::: Markov arrival process A Markov arrival process is defined by two matrices, D0 and D1 where elements of D0 represent hidden transitions and elements of D1 observable transitions. The block matrix Q below is a transition rate matrix for a continuous-time Markov chain. {\displaystyle Q={\begin{bmatrix}D_{0}&D_{1}&0&0&\cdots \\0&D_{0}&D_{1}&0&\cdots \\0&0&D_{0}&D_{1}&\cdots \\\vdots &\vdots &\ddots &\ddots &\ddots \end{bmatrix}}} The simplest example is a Poisson process where D0 = −λ and D1 = λ where there is only one possible transition, it is observable, and occurs at rate λ. For Q to be a valid transition rate matrix, the following restrictions apply to the Di: {\displaystyle {\begin{aligned}0\leq _{i,j}&<\infty \\0\leq _{i,j}&<\infty \quad i\neq j\\_{i,i}&<0\\(D_{0}+D_{1}){\boldsymbol {1}}&={\boldsymbol {0}}\end{aligned}}}
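A short Python sketch (interpretation follows the question's reminder that columns hold the probabilities of leaving the respective node) showing why only node B is guaranteed to be left under this teleporting matrix, independently of the link matrix R.

```python
# Teleporting matrix E: E[i][j] = probability that a walker at node j
# teleports to node i (each column describes one source node).
nodes = ["A", "B", "C"]
E = [[0.0, 0.5, 0.0],
     [0.0, 0.0, 0.0],
     [0.0, 0.5, 1.0]]

for j, src in enumerate(nodes):
    col = [E[i][j] for i in range(len(nodes))]
    total = sum(col)
    leaving = sum(p for i, p in enumerate(col) if i != j)  # mass going elsewhere
    print(f"{src}: teleport mass = {total:.1f}, mass leaving {src} = {leaving:.1f}")

# B: all of its teleport mass (1.0) goes to other nodes, so the walker can
# always leave B via teleporting, regardless of R.
# C: its teleport mass is a self-loop, and A has no teleport mass at all,
# so neither is guaranteed to be left without assumptions about R.
```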
epfl-collab
Which of the following methods does not exploit statistics on the co-occurrence of words in a text?
['Vector space retrieval', 'Transformers', 'Word embeddings', 'Fasttext']
A
null
Document 1::: Random indexing In Euclidean spaces, random projections are elucidated using the Johnson–Lindenstrauss lemma.The TopSig technique extends the random indexing model to produce bit vectors for comparison with the Hamming distance similarity function. It is used for improving the performance of information retrieval and document clustering. In a similar line of research, Random Manhattan Integer Indexing (RMII) is proposed for improving the performance of the methods that employ the Manhattan distance between text units. Many random indexing methods primarily generate similarity from co-occurrence of items in a corpus. Reflexive Random Indexing (RRI) generates similarity from co-occurrence and from shared occurrence with other items. Document 2::: Noisy text analytics Noisy text analytics is a process of information extraction whose goal is to automatically extract structured or semistructured information from noisy unstructured text data. While Text analytics is a growing and mature field that has great value because of the huge amounts of data being produced, processing of noisy text is gaining in importance because a lot of common applications produce noisy text data. Noisy unstructured text data is found in informal settings such as online chat, text messages, e-mails, message boards, newsgroups, blogs, wikis and web pages. Also, text produced by processing spontaneous speech using automatic speech recognition and printed or handwritten text using optical character recognition contains processing noise. Document 3::: Cosine similarity For example, in information retrieval and text mining, each word is assigned a different coordinate and a document is represented by the vector of the numbers of occurrences of each word in the document. Cosine similarity then gives a useful measure of how similar two documents are likely to be, in terms of their subject matter, and independently of the length of the documents.The technique is also used to measure cohesion within clusters in the field of data mining.One advantage of cosine similarity is its low complexity, especially for sparse vectors: only the non-zero coordinates need to be considered. Other names for cosine similarity include Orchini similarity and Tucker coefficient of congruence; the Otsuka–Ochiai similarity (see below) is cosine similarity applied to binary data. Document 4::: Random indexing Random indexing is a dimensionality reduction method and computational framework for distributional semantics, based on the insight that very-high-dimensional vector space model implementations are impractical, that models need not grow in dimensionality when new items (e.g. new terminology) are encountered, and that a high-dimensional model can be projected into a space of lower dimensionality without compromising L2 distance metrics if the resulting dimensions are chosen appropriately. This is the original point of the random projection approach to dimension reduction first formulated as the Johnson–Lindenstrauss lemma, and locality-sensitive hashing has some of the same starting points. Random indexing, as used in representation of language, originates from the work of Pentti Kanerva on sparse distributed memory, and can be described as an incremental formulation of a random projection.It can be also verified that random indexing is a random projection technique for the construction of Euclidean spaces—i.e. L2 normed vector spaces. 
Document 5::: Biomedical text mining Biomedical text mining (including biomedical natural language processing or BioNLP) refers to the methods and study of how text mining may be applied to texts and literature of the biomedical domain. As a field of research, biomedical text mining incorporates ideas from natural language processing, bioinformatics, medical informatics and computational linguistics. The strategies in this field have been applied to the biomedical literature available through services such as PubMed. In recent years, the scientific literature has shifted to electronic publishing but the volume of information available can be overwhelming.
epfl-collab
Which attribute gives the best split? A1: value a → (P=4, N=4), value b → (P=4, N=4); A2: value x → (P=5, N=1), value y → (P=3, N=3); A3: value t → (P=6, N=1), value j → (P=2, N=3)
['A1', 'All the same', 'A3', 'A2']
C
null
Document 1::: Split (graph theory) In graph theory, a split of an undirected graph is a cut whose cut-set forms a complete bipartite graph. A graph is prime if it has no splits. The splits of a graph can be collected into a tree-like structure called the split decomposition or join decomposition, which can be constructed in linear time. This decomposition has been used for fast recognition of circle graphs and distance-hereditary graphs, as well as for other problems in graph algorithms. Splits and split decompositions were first introduced by Cunningham (1982), who also studied variants of the same notions for directed graphs. Document 2::: Iterative Dichotomiser 3 Calculate the entropy of every attribute a {\displaystyle a} of the data set S {\displaystyle S} . Partition ("split") the set S {\displaystyle S} into subsets using the attribute for which the resulting entropy after splitting is minimized; or, equivalently, information gain is maximum. Make a decision tree node containing that attribute. Recurse on subsets using the remaining attributes. Document 3::: Split (phylogenetics) A split in phylogenetics is a bipartition of a set of taxa, and the smallest unit of information in unrooted phylogenetic trees: each edge of an unrooted phylogenetic tree represents one split, and the tree can be efficiently reconstructed from its set of splits. Moreover, when given several trees, the splits occurring in more than half of these trees give rise to a consensus tree, and the splits occurring in a smaller fraction of the trees generally give rise to a consensus Split Network. Document 4::: AZ64 AZ64 or AZ64 Encoding is a data compression algorithm proprietary to Amazon Web Services.Amazon claims better compression and better speed than raw, LZO or Zstandard, when used in Amazon's Redshift service. == References == Document 5::: 842 (compression algorithm) 842, 8-4-2, or EFT is a data compression algorithm. It is a variation on Lempel–Ziv compression with a limited dictionary length. With typical data, 842 gives 80 to 90 percent of the compression of LZ77 with much faster throughput and less memory use. Hardware implementations also provide minimal use of energy and minimal chip area. 842 compression can be used for virtual memory compression, for databases — especially column-oriented stores, and when streaming input-output — for example to do backups or to write to log files.
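A worked Python sketch of the entropy / information-gain computation behind "best split", using the (P, N) counts read off the question's table; the attribute with the largest gain (purest children) wins.

```python
from math import log2

def entropy(p, n):
    """Binary entropy of a node with p positive and n negative examples."""
    total = p + n
    if total == 0 or p == 0 or n == 0:
        return 0.0
    return -(p / total) * log2(p / total) - (n / total) * log2(n / total)

def information_gain(partitions):
    """partitions: list of (P, N) counts, one per attribute value."""
    P = sum(p for p, _ in partitions)
    N = sum(n for _, n in partitions)
    remainder = sum((p + n) / (P + N) * entropy(p, n) for p, n in partitions)
    return entropy(P, N) - remainder

# (P, N) counts per attribute value, as reconstructed from the question.
splits = {
    "A1": [(4, 4), (4, 4)],   # values a, b
    "A2": [(5, 1), (3, 3)],   # values x, y
    "A3": [(6, 1), (2, 3)],   # values t, j
}
for name, parts in splits.items():
    print(name, round(information_gain(parts), 3))
# A3 yields the largest gain, i.e. the purest child nodes -> best split.
```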
epfl-collab
Suppose that q is density-reachable from p. The chain of points that ensures this relationship is {t, u, g, r}. Which one of the following is FALSE?
['p has to be a core point', 'q has to be a border point', 'p and q will also be density-connected', '{t,u,g,r} have to be all core points.']
B
null
Document 1::: Density point In mathematics, Lebesgue's density theorem states that for any Lebesgue measurable set {\displaystyle A\subset \mathbb {R} ^{n}}, the "density" of A is 0 or 1 at almost every point in {\displaystyle \mathbb {R} ^{n}}. Additionally, the "density" of A is 1 at almost every point in A. Intuitively, this means that the "edge" of A, the set of points in A whose "neighborhood" is partially in A and partially outside of A, is negligible. Let μ be the Lebesgue measure on the Euclidean space Rn and A be a Lebesgue measurable subset of Rn. Define the approximate density of A in a ε-neighborhood of a point x in Rn as {\displaystyle d_{\varepsilon }(x)={\frac {\mu (A\cap B_{\varepsilon }(x))}{\mu (B_{\varepsilon }(x))}}} where Bε denotes the closed ball of radius ε centered at x. Lebesgue's density theorem asserts that for almost every point x of A the density {\displaystyle d(x)=\lim _{\varepsilon \to 0}d_{\varepsilon }(x)} exists and is equal to 0 or 1. Document 2::: Density point The set of points in the plane at which the density is neither 0 nor 1 is non-empty (the square boundary), but it is negligible. The Lebesgue density theorem is a particular case of the Lebesgue differentiation theorem. Thus, this theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure, see Discussion. Document 3::: Density point In other words, for every measurable set A, the density of A is 0 or 1 almost everywhere in Rn. However, if μ(A) > 0 and μ(Rn \ A) > 0, then there are always points of Rn where the density is neither 0 nor 1. For example, given a square in the plane, the density at every point inside the square is 1, on the edges is 1/2, and at the corners is 1/4. Document 4::: Contiguity (probability theory) By the aforementioned logic, this statement is also false. It is possible however that each of the measures Qn be absolutely continuous with respect to Pn, while the sequence Qn not being contiguous with respect to Pn. The fundamental Radon–Nikodym theorem for absolutely continuous measures states that if Q is absolutely continuous with respect to P, then Q has density with respect to P, denoted as ƒ = dQ⁄dP, such that for any measurable set A {\displaystyle Q(A)=\int _{A}f\,\mathrm {d} P,} which is interpreted as being able to "reconstruct" the measure Q from knowing the measure P and the derivative ƒ. A similar result exists for contiguous sequences of measures, and is given by the Le Cam's third lemma. Document 5::: Limiting density of discrete points In information theory, the limiting density of discrete points is an adjustment to the formula of Claude Shannon for differential entropy. It was formulated by Edwin Thompson Jaynes to address defects in the initial definition of differential entropy.
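A compact DBSCAN-style sketch in Python (coordinates, eps and min_pts are made up for illustration) of the notions in the question: every point on the chain from p must be a core point, while the end point q only needs to fall in the neighbourhood of the last core point.

```python
def region_query(points, idx, eps):
    """Indices of all points within distance eps of points[idx] (incl. itself)."""
    px, py = points[idx]
    return [j for j, (x, y) in enumerate(points)
            if (x - px) ** 2 + (y - py) ** 2 <= eps ** 2]

def is_core(points, idx, eps, min_pts):
    return len(region_query(points, idx, eps)) >= min_pts

def density_reachable(points, p, q, eps, min_pts):
    """q is density-reachable from p if a chain of core points leads from p
    into q's eps-neighbourhood; q itself need not be a core point."""
    frontier, seen = [p], {p}
    while frontier:
        cur = frontier.pop()
        if not is_core(points, cur, eps, min_pts):
            continue                    # only core points can extend the chain
        for nb in region_query(points, cur, eps):
            if nb == q:
                return True
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return False

# Toy layout: p at index 0, the chain t, u, g, r at indices 1-4, q at index 5.
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0)]
print(density_reachable(pts, p=0, q=5, eps=2.1, min_pts=3))  # True
```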
epfl-collab
In User-Based Collaborative Filtering, which of the following is correct, assuming that all the ratings are positive?
['Pearson Correlation Coefficient and Cosine Similarity have the same value range, but can return different similarity ranking for the users', 'Pearson Correlation Coefficient and Cosine Similarity have different value range, but return the same similarity ranking for the users', 'If the variance of the ratings of one of the users is 0, then their Cosine Similarity is not computable', 'If the ratings of two users have both variance equal to 0, then their Cosine Similarity is maximized']
D
null
Document 1::: Precision and recall For classification tasks, the terms true positives, true negatives, false positives, and false negatives (see Type I and type II errors for definitions) compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the classifier's prediction (sometimes known as the expectation), and the terms true and false refer to whether that prediction corresponds to the external judgment (sometimes known as the observation). Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows: Precision and recall are then defined as: Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity. Document 2::: Evaluation measures (information retrieval) Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query. Document 3::: Evaluation measures (information retrieval) Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection. Document 4::: Graph Laplacian Common in applications graphs with weighted edges are conveniently defined by their adjacency matrices where values of the entries are numeric and no longer limited to zeros and ones. In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization. Document 5::: Average precision Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. 
They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
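A small Python sketch (toy ratings, purely illustrative) contrasting cosine similarity and the Pearson correlation coefficient on user rating vectors: with all-positive constant ratings the cosine is maximal (1.0), while Pearson is undefined because the variance is zero.

```python
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def pearson(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sqrt(sum((a - mu) ** 2 for a in u)) * sqrt(sum((b - mv) ** 2 for b in v))
    return num / den if den else float("nan")   # undefined for zero variance

u = [4, 4, 4]        # zero-variance user
v = [2, 2, 2]        # zero-variance user
print(cosine(u, v))  # 1.0 -> cosine similarity is maximized
print(pearson(u, v)) # nan -> Pearson is not computable
```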
epfl-collab
The term frequency of a term is normalized
['by the maximal frequency of the term in the document collection', 'by the maximal frequency of all terms in the document', 'by the maximal term frequency of any document in the collection', 'by the maximal frequency of any term in the vocabulary']
B
null
Document 1::: Cycles per sample In digital signal processing (DSP), a normalized frequency is a ratio of a variable frequency (f) and a constant frequency associated with a system (such as a sampling rate, fs). Some software applications require normalized inputs and produce normalized outputs, which can be re-scaled to physical units when necessary. Mathematical derivations are usually done in normalized units, relevant to a wide range of applications. Document 2::: Normalization constant In probability theory, a normalizing constant or normalizing factor is used to reduce any probability function to a probability density function with total probability of one. For example, a Gaussian function can be normalized into a probability density function, which gives the standard normal distribution. In Bayes' theorem, a normalizing constant is used to ensure that the sum of all possible hypotheses equals 1. Other uses of normalizing constants include making the value of a Legendre polynomial at 1 and in the orthogonality of orthonormal functions. A similar concept has been used in areas other than probability, such as for polynomials. Document 3::: Cross-spectral density {\displaystyle \Delta t\to 0.} But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (also see normalized frequency) Document 4::: Normalization constant In probability theory, a normalizing constant is a constant by which an everywhere non-negative function must be multiplied so the area under its graph is 1, e.g., to make it a probability density function or a probability mass function. Document 5::: Cumulative frequency analysis Frequency analysis is the analysis of how often, or how frequently, an observed phenomenon occurs in a certain range. Frequency analysis applies to a record of length N of observed data X1, X2, X3 . .
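A brief Python sketch of the maximum-tf (augmented) normalization the question alludes to: each term's raw count in a document is divided by the count of the most frequent term in that same document.

```python
from collections import Counter

def normalized_tf(document_tokens):
    """Divide each raw term count by the maximal term count in the document."""
    counts = Counter(document_tokens)
    max_count = max(counts.values())
    return {term: c / max_count for term, c in counts.items()}

doc = "to be or not to be".split()
print(normalized_tf(doc))   # 'to' and 'be' get 1.0, the other terms 0.5
```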
epfl-collab
Which is an appropriate method for fighting skewed distributions of class labels in classification?
['Construct the validation set such that the class label distribution approximately matches the global distribution of the class labels', 'Use leave-one-out cross validation', 'Generate artificial data points for the most frequent classes', 'Include an over-proportional number of samples from the larger class']
B
null
Document 1::: Multi-label classification In machine learning, multi-label classification or multi-output classification is a variant of the classification problem where multiple nonexclusive labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of several (more than two) classes. In the multi-label problem the labels are nonexclusive and there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y; that is, it assigns a value of 0 or 1 for each element (label) in y. Document 2::: Multiclass classifier In machine learning and statistical classification, multiclass classification or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification). While many classification algorithms (notably multinomial logistic regression) naturally permit the use of more than two classes, some are by nature binary algorithms; these can, however, be turned into multinomial classifiers by a variety of strategies. Multiclass classification should not be confused with multi-label classification, where multiple labels are to be predicted for each instance. Document 3::: Loss functions for classification In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to). Given {\displaystyle {\mathcal {X}}} as the space of all possible inputs (usually {\displaystyle {\mathcal {X}}\subset \mathbb {R} ^{d}}), and {\displaystyle {\mathcal {Y}}=\{-1,1\}} as the set of labels (possible outputs), a typical goal of classification algorithms is to find a function {\displaystyle f:{\mathcal {X}}\to {\mathcal {Y}}} which best predicts a label {\displaystyle y} for a given input {\displaystyle {\vec {x}}}. However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same {\displaystyle {\vec {x}}} to generate different {\displaystyle y}. As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as {\displaystyle I=\int _{{\mathcal {X}}\times {\mathcal {Y}}}V(f({\vec {x}}),y)\,p({\vec {x}},y)\,d{\vec {x}}\,dy} where {\displaystyle V(f({\vec {x}}),y)} is a given loss function, and {\displaystyle p({\vec {x}},y)} is the probability density function of the process that generated the data, which can equivalently be written as {\displaystyle p({\vec {x}},y)=p(y\mid {\vec {x}})\,p({\vec {x}})}. Document 4::: Classification algorithm This category is about statistical classification algorithms. For more information, see Statistical classification.
Document 5::: One-class classification In machine learning, one-class classification (OCC), also known as unary classification or class-modelling, tries to identify objects of a specific class amongst all objects, by primarily learning from a training set containing only the objects of that class, although there exist variants of one-class classifiers where counter-examples are used to further refine the classification boundary. This is different from and more difficult than the traditional classification problem, which tries to distinguish between two or more classes with the training set containing objects from all the classes. Examples include the monitoring of helicopter gearboxes, motor failure prediction, or the operational status of a nuclear plant as 'normal': In this scenario, there are few, if any, examples of catastrophic system states; only the statistics of normal operation are known. While many of the above approaches focus on the case of removing a small number of outliers or anomalies, one can also learn the other extreme, where the single class covers a small coherent subset of the data, using an information bottleneck approach.
epfl-collab
Thang, Jeremie and Tugrulcan have built their own search engines. For a query Q, they obtained precision scores of 0.6, 0.7 and 0.8 respectively. Their F1 scores (calculated with the same parameters) are the same. Whose search engine has the highest recall on Q?
['Thang', 'Jeremie', 'We need more information', 'Tugrulcan']
A
null
Document 1::: Average precision Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query. Document 2::: Precision and recall Written as a formula: {\displaystyle {\frac {relevant\_retrieved\_instances}{all\_{\mathbf {relevant}}\_instances}}}. Both precision and recall are therefore based on relevance. Document 3::: Average precision Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection. Document 4::: Uncertain inference Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus d → q {\displaystyle d\to q} is uncertain. This will affect the plausibility of a given query. Document 5::: Uncertain inference Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus d → q {\displaystyle d\to q} is uncertain. This will affect the plausibility of a given query.
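A tiny Python computation of recall from precision and F1 (the common F1 value is assumed, since the question does not give it); holding F1 fixed, recall must decrease as precision increases, so the engine with precision 0.6 has the highest recall.

```python
def recall_from(precision, f1):
    # F1 = 2PR / (P + R)  =>  R = F1 * P / (2P - F1)
    return f1 * precision / (2 * precision - f1)

f1 = 0.6   # assumed common F1 score; any feasible value gives the same ordering
for name, p in [("Thang", 0.6), ("Jeremie", 0.7), ("Tugrulcan", 0.8)]:
    print(name, round(recall_from(p, f1), 3))
# Recall decreases as precision increases when F1 is held fixed,
# so Thang (precision 0.6) has the highest recall.
```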
epfl-collab
When compressing the adjacency list of a given URL, a reference list
['Is chosen from neighboring URLs that can be reached in a small number of hops', 'All of the above', 'May contain URLs not occurring in the adjacency list of the given URL', 'Lists all URLs not contained in the adjacency list of given URL']
C
null
Document 1::: Adjacency list In graph theory and computer science, an adjacency list is a collection of unordered lists used to represent a finite graph. Each unordered list within an adjacency list describes the set of neighbors of a particular vertex in the graph. This is one of several commonly used representations of graphs for use in computer programs. Document 2::: Compressed data structure The term compressed data structure arises in the computer science subfields of algorithms, data structures, and theoretical computer science. It refers to a data structure whose operations are roughly as fast as those of a conventional data structure for the problem, but whose size can be substantially smaller. The size of the compressed data structure is typically highly dependent upon the information entropy of the data being represented. Important examples of compressed data structures include the compressed suffix array and the FM-index,both of which can represent an arbitrary text of characters T for pattern matching. Document 3::: Compressed data structure In other words, they simultaneously provide a compressed and quickly searchable representation of the text T. They represent a substantial space improvement over the conventional suffix tree and suffix array, which occupy many times more space than the size of T. They also support searching for arbitrary patterns, as opposed to the inverted index, which can support only word-based searches. In addition, inverted indexes do not have the self-indexing feature. An important related notion is that of a succinct data structure, which uses space roughly equal to the information-theoretic minimum, which is a worst-case notion of the space needed to represent the data. Document 4::: Compressed data structure Given any input pattern P, they support the operation of finding if and where P appears in T. The search time is proportional to the sum of the length of pattern P, a very slow-growing function of the length of the text T, and the number of reported matches. The space they occupy is roughly equal to the size of the text T in entropy-compressed form, such as that obtained by Prediction by Partial Matching or gzip. Moreover, both data structures are self-indexing, in that they can reconstruct the text T in a random access manner, and thus the underlying text T can be discarded. Document 5::: Succinct data structure In computer science, a succinct data structure is a data structure which uses an amount of space that is "close" to the information-theoretic lower bound, but (unlike other compressed representations) still allows for efficient query operations. The concept was originally introduced by Jacobson to encode bit vectors, (unlabeled) trees, and planar graphs. Unlike general lossless data compression algorithms, succinct data structures retain the ability to use them in-place, without decompressing them first. A related notion is that of a compressed data structure, insofar as the size of the stored or encoded data similarly depends upon the specific content of the data itself.
epfl-collab
Data being classified as unstructured or structured depends on the:
['Degree of abstraction', 'Level of human involvement', 'Type of physical storage', 'Amount of data ']
A
null
Document 1::: Unstructured data Unstructured data (or unstructured information) is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in fielded form in databases or annotated (semantically tagged) in documents. In 1998, Merrill Lynch said "unstructured data comprises the vast majority of data found in an organization, some estimates run as high as 80%." Document 2::: Semi-structured data Semi-structured data is a form of structured data that does not obey the tabular structure of data models associated with relational databases or other forms of data tables, but nonetheless contains tags or other markers to separate semantic elements and enforce hierarchies of records and fields within the data. Therefore, it is also known as self-describing structure. In semi-structured data, the entities belonging to the same class may have different attributes even though they are grouped together, and the attributes' order is not important. Semi-structured data are increasingly occurring since the advent of the Internet where full-text documents and databases are not the only forms of data anymore, and different applications need a medium for exchanging information. In object-oriented databases, one often finds semi-structured data. Document 3::: Structured data analysis (statistics) Structured data analysis is the statistical data analysis of structured data. This can arise either in the form of an a priori structure such as multiple-choice questionnaires or in situations with the need to search for structure that fits the given data, either exactly or approximately. This structure can then be used for making comparisons, predictions, manipulations etc. Document 4::: Structured data analysis (systems analysis) Structured data analysis (SDA) is a method for analysing the flow of information within an organization using data flow diagrams. It was originally developed by IBM for systems analysis in electronic data processing, although it has now been adapted for use to describe the flow of information in any kind of project or organization, particularly in the construction industry where the nodes could be departments, contractors, customers, managers, workers etc. Document 5::: Structure mining Structure mining or structured data mining is the process of finding and extracting useful information from semi-structured data sets. Graph mining, sequential pattern mining and molecule mining are special cases of structured data mining.
epfl-collab
Suppose you have a search engine that retrieves the top 100 documents and achieves 90% precision and 20% recall. You modify the search engine to retrieve the top 200 and mysteriously, the precision stays the same. Which one is CORRECT?
['The F-score stays the same', 'This is not possible', 'The number of relevant documents is 450', 'The recall becomes 10%']
C
null
Document 1::: Precision and recall In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: {\displaystyle {\frac {relevant\_retrieved\_instances}{all\_{\mathbf {retrieved}}\_instances}}}. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Document 2::: Precision and recall Written as a formula: {\displaystyle {\frac {relevant\_retrieved\_instances}{all\_{\mathbf {relevant}}\_instances}}}. Both precision and recall are therefore based on relevance. Document 3::: Average precision Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection. Document 4::: Precision and recall More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors, for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned). Document 5::: Average precision Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
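The arithmetic behind the record above as a short Python sketch: 90% precision on the top 100 means 90 relevant documents retrieved, and if those are 20% of all relevant documents, the collection must contain 450 relevant documents.

```python
k = 100
precision_at_k = 0.9
recall_at_k = 0.2

relevant_retrieved = precision_at_k * k               # 0.9 * 100 = 90 documents
total_relevant = relevant_retrieved / recall_at_k     # 90 / 0.2 = 450 documents
print(int(total_relevant))
```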
epfl-collab
In the χ2 statistics for a binary feature, we obtain P(χ2 | DF = 1) > 0.05. This means in this case, it is assumed:
['That the class label correlates with the feature', 'That the class label is independent of the feature', 'That the class labels depends on the feature', 'None of the above']
B
null
Document 1::: 5 sigma In the case where X takes random values from a finite data set x1, x2, ..., xN, with each value having the same probability, the standard deviation is {\displaystyle \sigma ={\sqrt {{\frac {1}{N}}\left[(x_{1}-\mu )^{2}+(x_{2}-\mu )^{2}+\cdots +(x_{N}-\mu )^{2}\right]}}} or, by using summation notation, {\displaystyle \sigma ={\sqrt {{\frac {1}{N}}\sum _{i=1}^{N}(x_{i}-\mu )^{2}}}}. If, instead of having equal probabilities, the values have different probabilities, let x1 have probability p1, x2 have probability p2, ..., xN have probability pN. In this case, the standard deviation will be {\displaystyle \sigma ={\sqrt {\sum _{i=1}^{N}p_{i}(x_{i}-\mu )^{2}}}}. Document 2::: Sparse Distributed Memory The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d). The normal distribution F with mean n/2 and standard deviation {\displaystyle {\sqrt {n}}/2} is a good approximation to it: N(d) = Pr{d(x, y) ≤ d} ≅ F((d − n/2)/{\displaystyle {\sqrt {n/4}}}). Tendency to orthogonality: An outstanding property of N is that most of it lies at approximately the mean (indifference) distance n/2 from a point (and its complement). In other words, most of the space is nearly orthogonal to any given point, and the larger n is, the more pronounced is this effect. Document 3::: Sparse distributed memory The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d). The normal distribution F with mean n/2 and standard deviation {\displaystyle {\sqrt {n}}/2} is a good approximation to it: N(d) = Pr{d(x, y) ≤ d} ≅ F((d − n/2)/{\displaystyle {\sqrt {n/4}}}). Tendency to orthogonality: An outstanding property of N is that most of it lies at approximately the mean (indifference) distance n/2 from a point (and its complement). In other words, most of the space is nearly orthogonal to any given point, and the larger n is, the more pronounced is this effect. Document 4::: Probit function If we consider the familiar fact that the standard normal distribution places 95% of probability between −1.96 and 1.96, and is symmetric around zero, it follows that {\displaystyle \Phi (-1.96)=0.025=1-\Phi (1.96)}. The probit function gives the 'inverse' computation, generating a value of a standard normal random variable, associated with specified cumulative probability. Continuing the example, {\displaystyle \operatorname {probit} (0.025)=-1.96=-\operatorname {probit} (0.975)}. In general, {\displaystyle \Phi (\operatorname {probit} (p))=p} and {\displaystyle \operatorname {probit} (\Phi (z))=z}. Document 5::: Chi distribution If {\displaystyle Z_{1},\ldots ,Z_{k}} are {\displaystyle k} independent, normally distributed random variables with mean 0 and standard deviation 1, then the statistic {\displaystyle Y={\sqrt {\sum _{i=1}^{k}Z_{i}^{2}}}} is distributed according to the chi distribution. The chi distribution has one positive integer parameter {\displaystyle k}, which specifies the degrees of freedom (i.e. the number of random variables {\displaystyle Z_{i}}). The most familiar examples are the Rayleigh distribution (chi distribution with two degrees of freedom) and the Maxwell–Boltzmann distribution of the molecular speeds in an ideal gas (chi distribution with three degrees of freedom).
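A minimal Pearson χ² test of independence in pure Python (the 2×2 counts are hypothetical): with one degree of freedom, a statistic below the 5% critical value 3.841 corresponds to p > 0.05, i.e. the class label is assumed independent of the feature.

```python
def chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n          # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: rows = feature present / absent, cols = class 1 / class 0.
table = [[25, 25],
         [24, 26]]
stat = chi2_2x2(table)
print(round(stat, 3), "independent at 5%:", stat < 3.841)  # 3.841 = chi2 critical value, df = 1
```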
epfl-collab
Which of the following is correct regarding the use of Hidden Markov Models (HMMs) for entity recognition in text documents?
['The cost of predicting a word is linear in the lengths of the text preceding the word.', 'The label of one word is predicted based on all the previous labels', 'An HMM model can be built using words enhanced with morphological features as input.', 'The cost of learning the model is quadratic in the lengths of the text.']
C
null
Document 1::: Sequence labeling Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field. Document 2::: Sequence labeling Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field. Document 3::: Maximum-entropy Markov model In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction. Document 4::: Semantic analysis (machine learning) A prominent example is PLSI. Latent Dirichlet allocation involves attributing document terms to topics. n-grams and hidden Markov models work by representing the term stream as a Markov chain where each term is derived from the few terms before it. Document 5::: Text categorization Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science.
epfl-collab
10 itemsets out of 100 contain item A, of which 5 also contain B. The rule A -> B has:
['5% support and 10% confidence', '5% support and 50% confidence', '10% support and 50% confidence', '10% support and 10% confidence']
B
null
Document 1::: Inclusion-exclusion principle In combinatorics, a branch of mathematics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as | A ∪ B | = | A | + | B | − | A ∩ B | {\displaystyle |A\cup B|=|A|+|B|-|A\cap B|} where A and B are two finite sets and |S | indicates the cardinality of a set S (which may be considered as the number of elements of the set, if the set is finite). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in the intersection of the two sets and the count is corrected by subtracting the size of the intersection. The inclusion-exclusion principle, being a generalization of the two-set case, is perhaps more clearly seen in the case of three sets, which for the sets A, B and C is given by | A ∪ B ∪ C | = | A | + | B | + | C | − | A ∩ B | − | A ∩ C | − | B ∩ C | + | A ∩ B ∩ C | {\displaystyle |A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C|} This formula can be verified by counting how many times each region in the Venn diagram figure is included in the right-hand side of the formula. Document 2::: Affinity analysis Also, a priori algorithm is used to reduce the search space for the problem.The support metric in the association rule learning algorithm is defined as the frequency of the antecedent or consequent appearing together in a data set. Moreover, confidence is expressed as the reliability of the association rules determined by the ratio of the data records containing both A and B. The minimum threshold for support and confidence are inputs to the model. Considering all the above-mentioned definitions, affinity analysis can develop rules that will predict the occurrence of an event based on the occurrence of other events. Document 3::: Affinity analysis Also, a priori algorithm is used to reduce the search space for the problem.The support metric in the association rule learning algorithm is defined as the frequency of the antecedent or consequent appearing together in a data set. Moreover, confidence is expressed as the reliability of the association rules determined by the ratio of the data records containing both A and B. The minimum threshold for support and confidence are inputs to the model. Considering all the above-mentioned definitions, affinity analysis can develop rules that will predict the occurrence of an event based on the occurrence of other events. Document 4::: Subset inclusion In mathematics, set A is a subset of a set B if all elements of A are also elements of B; B is then a superset of A. It is possible for A and B to be equal; if they are unequal, then A is a proper subset of B. The relationship of one set being a subset of another is called inclusion (or sometimes containment). A is a subset of B may also be expressed as B includes (or contains) A or A is included (or contained) in B. A k-subset is a subset with k elements. The subset relation defines a partial order on sets. In fact, the subsets of a given set form a Boolean algebra under the subset relation, in which the join and meet are given by intersection and union, and the subset relation itself is the Boolean inclusion relation. Document 5::: Item tree analysis Other typical examples are questionnaires where the items are statements to which subjects can agree (1) or disagree (0). 
Depending on the content of the items it is possible that the response of a subject to an item j determines her or his responses to other items. It is, for example, possible that each subject who agrees to item j will also agree to item i. In this case we say that item j implies item i (short i → j {\displaystyle i\rightarrow j} ). The goal of an ITA is to uncover such deterministic implications from the data set D.
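The support and confidence computation from the question as a one-screen Python sketch: 5 of 100 itemsets contain both A and B (5% support), and 5 of the 10 itemsets containing A also contain B (50% confidence).

```python
def support_confidence(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent."""
    n = len(transactions)
    with_a = [t for t in transactions if antecedent in t]
    with_both = [t for t in with_a if consequent in t]
    support = len(with_both) / n
    confidence = len(with_both) / len(with_a)
    return support, confidence

# Toy database matching the question: 100 itemsets, 10 contain A, 5 contain A and B.
tx = [{"A", "B"}] * 5 + [{"A"}] * 5 + [{"C"}] * 90
print(support_confidence(tx, "A", "B"))   # (0.05, 0.5)
```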
epfl-collab
Which of the following is correct regarding the use of Hidden Markov Models (HMMs) for entity recognition in text documents?
['When computing the emission probabilities, a word can be replaced by a morphological feature (e.g., the number of uppercase first characters)', 'HMMs cannot predict the label of a word that appears only in the test set', 'If the smoothing parameter λ is equal to 1, the emission probabilities for all the words in the test set will be equal', 'The label of one word is predicted based on all the previous labels']
A
null
Document 1::: Sequence labeling Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field. Document 2::: Sequence labeling Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field. Document 3::: Maximum-entropy Markov model In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction. Document 4::: Semantic analysis (machine learning) A prominent example is PLSI. Latent Dirichlet allocation involves attributing document terms to topics. n-grams and hidden Markov models work by representing the term stream as a Markov chain where each term is derived from the few terms before it. Document 5::: Text categorization Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science.
epfl-collab
A basic statement in RDF would be expressed in the relational data model by a table
['with three attributes', 'with one attribute', 'with two attributes', 'cannot be expressed in the relational data model']
C
null
Document 1::: Relational Model The relational model (RM) is an approach to managing data using a structure and language consistent with first-order predicate logic, first described in 1969 by English computer scientist Edgar F. Codd, where all data is represented in terms of tuples, grouped into relations. A database organized in terms of the relational model is a relational database. The purpose of the relational model is to provide a declarative method for specifying data and queries: users directly state what information the database contains and what information they want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for answering queries. Document 2::: Relational Model Most relational databases use the SQL data definition and query language; these systems implement what can be regarded as an engineering approximation to the relational model. A table in a SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, SQL databases deviate from the relational model in many details, and Codd fiercely argued against deviations that compromise the original principles. Document 3::: Logical schema A logical data model or logical schema is a data model of a specific problem domain expressed independently of a particular database management product or storage technology (physical data model) but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags. This is as opposed to a conceptual data model, which describes the semantics of an organization without reference to technology. Document 4::: SPARQL SPARQL (pronounced "sparkle" , a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March, 2013.SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns.Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery. Document 5::: Oracle NoSQL Database NoSQL Database supports tabular model. Each row is identified by a unique key, and has a value, of arbitrary length, which is interpreted by the application. The application can manipulate (insert, delete, update, read) a single row in a transaction. The application can also perform an iterative, non-transactional scan of all the rows in the database.
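A small sketch using Python's standard-library sqlite3 module of one common relational encoding of RDF (the table and value names are hypothetical): the predicate becomes the table name, so a statement (subject, predicate, object) is stored as a row in a table with two attributes.

```python
import sqlite3

# One common relational encoding of RDF: each predicate gets its own table,
# so a statement (subject, predicate, object) becomes the row (subject, object)
# in the table named after the predicate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hasAuthor (subject TEXT, object TEXT)")

# RDF statement:  <book1>  hasAuthor  "Alice"
conn.execute("INSERT INTO hasAuthor VALUES (?, ?)", ("book1", "Alice"))

print(conn.execute("SELECT * FROM hasAuthor").fetchall())
conn.close()
```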
epfl-collab
Which of the following statements is wrong regarding RDF?
['The object value of a type statement corresponds to a table name in SQL', 'Blank nodes in RDF graphs correspond to the special value NULL in SQL', 'RDF graphs can be encoded as SQL databases', 'An RDF statement would be expressed in SQL as a tuple in a table']
B
null
Document 1::: SPARQL SPARQL (pronounced "sparkle" , a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March, 2013.SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns.Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery. Document 2::: List of SPARQL implementations This list shows notable triplestores, APIs, and other storage engines that have implemented the W3C SPARQL standard. Amazon Neptune Apache Marmotta AllegroGraph Eclipse RDF4J Apache Jena with ARQ Blazegraph Cray Urika-GD IBM Db2 - Removed in v11.5. KAON2 MarkLogic Mulgara NitrosBase Ontotext GraphDB Oracle DB Enterprise Spatial & Graph RDFLib Python library Redland / Redstore Virtuoso Document 3::: NGSI-LD The NGSI-LD information model represents Context Information as entities that have properties and relationships to other entities. It is derived from property graphs, with semantics formally defined on the basis of RDF and the semantic web framework. It can be serialized using JSON-LD. Every entity and relationship is given a unique IRI reference as identifier, making the corresponding data exportable as linked data datasets. The -LD suffix denotes this affiliation to the linked data universe. Document 4::: Knowledge discovery Knowledge extraction is the creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL (data warehouse), the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data. The RDB2RDF W3C group is currently standardizing a language for extraction of resource description frameworks (RDF) from relational databases. Another popular example for knowledge extraction is the transformation of Wikipedia into structured data and also the mapping to existing knowledge (see DBpedia and Freebase). Document 5::: Global Research Identifier Database The 30th public release of GRID was on 27 August 2018, and the database contained 89,506 entries. It is available in the Resource Description Framework (RDF) specification as linked data, and can therefore be linked to other data. 
Containing 14,401 relationships, GRID models two types of relationships: a parent-child relationship that defines a subordinate association, and a related relationship that describes other associations. In December 2016, Digital Science released GRID under a Creative Commons CC0 licence — without restriction under copyright or database law. The database is available for download as a ZIP archive, which includes the entire database in JSON and CSV file formats. From all the sources from which it draws information, including funding datasets, Digital Science claims that GRID covers 92% of institutions.
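Since the passage above introduces SPARQL as a query language built from triple patterns, here is a small sketch that runs such a query over an in-memory RDF graph. It assumes the third-party rdflib package is available; the namespace, resources, and query are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace

# Hypothetical mini-graph; all URIs and values are made up for this example.
EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Alice, EX.worksFor, EX.EPFL))
g.add((EX.EPFL, EX.locatedIn, Literal("Lausanne")))

# A SPARQL query consisting of two conjoined triple patterns.
q = """
SELECT ?person ?place WHERE {
    ?person <http://example.org/worksFor> ?org .
    ?org    <http://example.org/locatedIn> ?place .
}
"""
for person, place in g.query(q):
    print(person, place)
```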
epfl-collab
The number of non-zero entries in a column of a term-document matrix indicates:
['how relevant a term is for a document ', 'how many terms of the vocabulary a document contains', 'none of the other responses is correct', 'how often a term of the vocabulary occurs in a document']
B
null
Document 1::: Zero matrix In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of $m \times n$ matrices, and is denoted by the symbol $O$ or $0$ followed by subscripts corresponding to the dimension of the matrix as the context sees fit. Some examples of zero matrices are $0_{1,1}=\begin{bmatrix}0\end{bmatrix}$, $0_{2,2}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$, and $0_{2,3}=\begin{bmatrix}0&0&0\\0&0&0\end{bmatrix}$. Document 2::: Pascal matrix The non-zero elements of a Pascal matrix are given by the binomial coefficients: such that the indices i, j start at 0, and ! denotes the factorial. Document 3::: Zernike polynomials Applications often involve linear algebra, where an integral over a product of Zernike polynomials and some other factor builds a matrix element. To enumerate the rows and columns of these matrices by a single index, a conventional mapping of the two indices n and l to a single index j has been introduced by Noll. The table of this association $Z_n^l \rightarrow Z_j$ starts as follows (sequence A176988 in the OEIS): $j = \frac{n(n+1)}{2} + |l| + \begin{cases} 0, & l > 0 \wedge n \equiv \{0,1\} \pmod{4}; \\ 0, & l < 0 \wedge n \equiv \{2,3\} \pmod{4}; \\ 1, & l \geq 0 \wedge n \equiv \{2,3\} \pmod{4}; \\ 1, & l \leq 0 \wedge n \equiv \{0,1\} \pmod{4}. \end{cases}$ Document 4::: Boolean model of information retrieval An index term is a word or expression, which may be stemmed, describing or characterizing a document, such as a keyword given for a journal article. Let $T$ be the set of all such index terms. A document is any subset of $T$. Let $D$ be the set of all documents. Document 5::: Logical matrix A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science.
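To make the term-document matrix concrete, the short sketch below builds one for a made-up three-document corpus, with one row per vocabulary term and one column per document, and then counts the non-zero entries in each column. The corpus is invented purely for illustration.

```python
# Toy corpus; the documents and vocabulary are made up for this example.
docs = ["the cat sat", "the cat sat on the mat", "dogs bark"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document matrix: one row per vocabulary term, one column per document,
# entry = number of times the term occurs in the document.
matrix = [[d.split().count(term) for d in docs] for term in vocab]

for j, d in enumerate(docs):
    nonzero = sum(1 for row in matrix if row[j] != 0)
    # In this layout, a column's non-zero count equals the number of distinct
    # vocabulary terms that occur in that document.
    print(f"column {j} ({d!r}): {nonzero} non-zero entries")
```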
epfl-collab
What is TRUE regarding Fagin's algorithm?
['It performs a complete scan over the posting files', 'It provably returns the k documents with the largest aggregate scores', 'Posting files need to be indexed by TF-IDF weights', 'It never reads more than (kn)1⁄2 entries from a posting list']
B
null
Document 1::: Fagin's theorem Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines. Document 2::: Ford-Fulkerson algorithm The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is sometimes called a "method" instead of an "algorithm" as the approach to finding augmenting paths in a residual graph is not fully specified or it is specified in several implementations with different running times. It was published in 1956 by L. R. Ford Jr. and D. R. Fulkerson. Document 3::: Ford-Fulkerson algorithm The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path. Document 4::: Fibonacci search technique In computer science, the Fibonacci search technique is a method of searching a sorted array using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers. Compared to binary search where the sorted array is divided into two equal-sized parts, one of which is examined further, Fibonacci search divides the array into two parts that have sizes that are consecutive Fibonacci numbers. On average, this leads to about 4% more comparisons to be executed, but it has the advantage that one only needs addition and subtraction to calculate the indices of the accessed array elements, while classical binary search needs bit-shift (see Bitwise operation), division or multiplication, operations that were less common at the time Fibonacci search was first published. Fibonacci search has an average- and worst-case complexity of O(log n) (see Big O notation). Document 5::: Faugère's F4 and F5 algorithms This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences.
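The retrieved passages above concern Fagin's theorem and other algorithms rather than the top-k aggregation method the question asks about, so the sketch below is an assumed, simplified reconstruction of Fagin's Algorithm (FA): round-robin sorted access over score-sorted posting lists until at least k documents have been seen in every list, followed by random access to fill in missing scores. The posting lists and scores are invented for illustration.

```python
import heapq

def fagins_algorithm(lists, k, aggregate=sum):
    """Simplified sketch of Fagin's Algorithm (FA) for top-k aggregation.

    `lists` holds m posting lists, each sorted by descending score, with
    entries (doc_id, score). Sorted access proceeds round-robin until at
    least k documents have been seen in every list (or the lists run out);
    random access then fills in any missing scores before aggregation.
    """
    m = len(lists)
    by_doc = [dict(lst) for lst in lists]   # random-access lookup per list
    seen = {}                               # doc_id -> set of list indices
    depth = 0
    while True:
        for i, lst in enumerate(lists):
            if depth < len(lst):
                doc, _ = lst[depth]
                seen.setdefault(doc, set()).add(i)
        depth += 1
        seen_everywhere = [d for d, s in seen.items() if len(s) == m]
        if len(seen_everywhere) >= k or all(depth >= len(lst) for lst in lists):
            break
    # Random access: treat a document absent from a list as scoring 0.0 there.
    scored = [(aggregate(by_doc[i].get(d, 0.0) for i in range(m)), d) for d in seen]
    return heapq.nlargest(k, scored)

# Hypothetical posting lists, each sorted by descending per-term score.
l1 = [("d1", 0.9), ("d2", 0.8), ("d3", 0.1)]
l2 = [("d2", 0.7), ("d3", 0.6), ("d1", 0.2)]
print(fagins_algorithm([l1, l2], k=2))  # [(1.5, 'd2'), (1.1, 'd1')]
```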
epfl-collab
A false negative in sampling can only occur for itemsets with support smaller than
['p*s', 'p*m', 'the threshold s', 'None of the above']
D
null
Document 1::: Multiple comparisons problem However, if 100 tests are each conducted at the 5% level and all corresponding null hypotheses are true, the expected number of incorrect rejections (also known as false positives or Type I errors) is 5. If the tests are statistically independent from each other (i.e. are performed on independent samples), the probability of at least one incorrect rejection is approximately 99.4%. The multiple comparisons problem also applies to confidence intervals. Document 2::: Precision and recall Seven dogs were missed (false negatives), and seven cats were correctly excluded (true negatives). The program's precision is then 5/8 (true positives / selected elements) while its recall is 5/12 (true positives / relevant elements). Adopting a hypothesis-testing approach from statistics, in which, in this case, the null hypothesis is that a given item is irrelevant (i.e., not a dog), absence of type I and type II errors (i.e., perfect specificity and sensitivity of 100% each) corresponds respectively to perfect precision (no false positive) and perfect recall (no false negative). Document 3::: False positive rate In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification). The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio. Document 4::: Type I error rate In terms of false positives and false negatives, a positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; "false" means the conclusion drawn is incorrect. Thus, a type I error is equivalent to a false positive, and a type II error is equivalent to a false negative. Document 5::: Precision and recall For classification tasks, the terms true positives, true negatives, false positives, and false negatives (see Type I and type II errors for definitions) compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the classifier's prediction (sometimes known as the expectation), and the terms true and false refer to whether that prediction corresponds to the external judgment (sometimes known as the observation). Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows: Precision and recall are then defined as: Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity.
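The question above concerns false negatives when frequent itemsets are mined from a sample of the transactions. The toy simulation below uses an invented random transaction database and arbitrary parameters: it mines frequent pairs from a sample at a scaled-down threshold and counts how many truly frequent pairs the sample misses. It is a sketch for intuition only, not an argument for any particular answer choice.

```python
import random
from collections import Counter
from itertools import combinations

random.seed(0)

# Hypothetical transaction database and parameters, invented for illustration.
items = list("abcdef")
transactions = [set(random.sample(items, random.randint(1, 4))) for _ in range(2000)]
s, p = 200, 0.1   # full-data support threshold and sampling rate

def frequent_pairs(txns, threshold):
    """Item pairs whose support (co-occurrence count) meets the threshold."""
    counts = Counter(pair for t in txns for pair in combinations(sorted(t), 2))
    return {pair for pair, c in counts.items() if c >= threshold}

truth = frequent_pairs(transactions, s)
sample = [t for t in transactions if random.random() < p]
found = frequent_pairs(sample, p * s)   # threshold scaled down to the sample size

false_negatives = truth - found         # truly frequent pairs missed in the sample
print(len(truth), len(found), len(false_negatives))
```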