source_dataset | question | choices | answer | rationale | documents
---|---|---|---|---|---|
epfl-collab | Which of the following scheduler policies are preemptive? | ['RR (Round Robin)', 'FIFO (First In, First Out)', 'STCF (Shortest Time to Completion First)', 'SJF (Shortest Job First)'] | C | null | Document 1:::
Kernel preemption
In computer operating system design, kernel preemption is a property possessed by some kernels (the cores of operating systems), in which the CPU can be interrupted in the middle of executing kernel code and assigned other tasks (from which it later returns to finish its kernel tasks).
Document 2:::
O(1) scheduler
An O(1) scheduler (pronounced "O of 1 scheduler", "Big O of 1 scheduler", or "constant time scheduler") is a kernel scheduling design that can schedule processes within a constant amount of time, regardless of how many processes are running on the operating system. This is an improvement over previously used O(n) schedulers, which schedule processes in an amount of time that scales linearly based on the amounts of inputs. In the realm of real-time operating systems, deterministic execution is key, and an O(1) scheduler is able to provide scheduling services with a fixed upper-bound on execution times. The O(1) scheduler was used in Linux releases 2.6.0 thru 2.6.22 (2003-2007), at which point it was superseded by the Completely Fair Scheduler.
Document 3:::
Micro-Controller Operating Systems
Lower priority tasks can be preempted by higher priority tasks at any time. Higher priority tasks use operating system (OS) services (such as a delay or event) to allow lower priority tasks to execute. OS services are provided for managing tasks and memory, communicating between tasks, and timing.
Document 4:::
Max-min fairness
In communication networks, multiplexing and the division of scarce resources, max-min fairness is said to be achieved by an allocation if and only if the allocation is feasible and an attempt to increase the allocation of any participant necessarily results in the decrease in the allocation of some other participant with an equal or smaller allocation. In best-effort statistical multiplexing, a first-come first-served (FCFS) scheduling policy is often used. The advantage with max-min fairness over FCFS is that it results in traffic shaping, meaning that an ill-behaved flow, consisting of large data packets or bursts of many packets, will only punish itself and not other flows. Network congestion is consequently to some extent avoided. Fair queuing is an example of a max-min fair packet scheduling algorithm for statistical multiplexing and best-effort networks, since it gives scheduling priority to users that have achieved lowest data rate since they became active. In case of equally sized data packets, round-robin scheduling is max-min fair.
Document 5:::
Priority inversion
In computer science, priority inversion is a scenario in scheduling in which a high priority task is indirectly superseded by a lower priority task effectively inverting the assigned priorities of the tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks. Inversion occurs when there is a resource contention with a low-priority task that is then preempted by a medium-priority task. |
epfl-collab | Which of the following are correct implementations of the acquire function? Assume 0 means UNLOCKED and 1 means LOCKED. Initially l->locked = 0. | ['c \n void acquire(struct lock *l)\n {\n if(l->locked == 0) \n return;\n }', 'c \n void acquire(struct lock *l)\n {\n for(;;)\n if(xchg(&l->locked, 1) == 0)\n return;\n }', 'c \n void acquire(struct lock *l)\n {\n for(;;)\n if(cas(&l->locked, 1, 0) == 1)\n return;\n }', 'c \n void acquire(struct lock *l)\n {\n if(cas(&l->locked, 0, 1) == 0)\n return;\n }'] | B | null | Document 1:::
Test-and-set
A lock can be built using an atomic test-and-set instruction. The code assumes that the memory location was initialized to 0 at some point prior to the first test-and-set. The calling process obtains the lock if the old value was 0; otherwise the while-loop spins waiting to acquire the lock. This is called a spinlock.
Document 2:::
Phase-locked loop ranges
$\exists t>T_{\text{lock}}:\left|\theta_{\Delta}(0)-\theta_{\Delta}(t)\right|\geq 2\pi.$ Here, sometimes, the limit of the difference or the maximum of the difference is considered. Definition of lock-in range: if the loop is in a locked state, then after an abrupt change of $\omega_{\Delta}^{\text{free}}$ within a lock-in range $\left|\omega_{\Delta}^{\text{free}}\right|\leq\omega_{\ell}$, the PLL acquires lock without cycle slipping. Here $\omega_{\ell}$ is called the lock-in frequency.
Document 3:::
Phase-locked loop range
Also called acquisition range, capture range. Assume that the loop power supply is initially switched off and then at $t=0$ the power is switched on, and assume that the initial frequency difference is sufficiently large. The loop may not lock within one beat note, but the VCO frequency will be slowly tuned toward the reference frequency (acquisition process). This effect is also called a transient stability. The pull-in range is used to name such frequency deviations that make the acquisition process possible (see, for example, explanations in Gardner (1966, p.
Document 4:::
Phase-locked loop ranges
Such a long acquisition process is called cycle slipping. If the difference between the initial and final phase deviation is larger than $2\pi$, we say that cycle slipping takes place: $\exists t>T_{\text{lock}}:\left|\theta_{\Delta}(0)-\theta_{\Delta}(t)\right|\geq 2\pi.$
Document 5:::
Phase-locked loop ranges
The terms hold-in range, pull-in range (acquisition range), and lock-in range are widely used by engineers for the concepts of frequency deviation ranges within which phase-locked loop-based circuits can achieve lock under various additional conditions. |
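A minimal C11 sketch of the test-and-set spinlock described in Document 1 above, assuming the quiz's `xchg` denotes an atomic exchange that returns the old value (as in the marked option B):

```c
#include <stdatomic.h>

struct lock { atomic_int locked; };   /* 0 = UNLOCKED, 1 = LOCKED */

/* Spin until the atomic exchange returns the old value 0, i.e. this caller
 * is the one that flipped the lock from UNLOCKED to LOCKED.  This mirrors
 * option B above: xchg(&l->locked, 1) == 0. */
void acquire(struct lock *l)
{
    while (atomic_exchange(&l->locked, 1) != 0)
        ;   /* spin: someone else holds the lock */
}

void release(struct lock *l)
{
    atomic_store(&l->locked, 0);
}
```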
epfl-collab | In which of the following cases does JOS acquire the big kernel lock? | ['Processor traps in user mode', 'Switching from kernel mode to user mode', 'Processor traps in kernel mode', 'Initialization of application processor'] | A | null | Document 1:::
Java Optimized Processor
Java Optimized Processor (JOP) is a Java processor, an implementation of Java virtual machine (JVM) in hardware. JOP is free hardware under the GNU General Public License, version 3. The intention of JOP is to provide a small hardware JVM for embedded real-time systems. The main feature is the predictability of the execution time of Java bytecodes. JOP is implemented over an FPGA.
Document 2:::
System Contention Scope
In computer science, the System Contention Scope is one of two thread-scheduling schemes used in operating systems. This scheme is used by the kernel to decide which kernel-level thread to schedule onto a CPU, wherein all threads (as opposed to only user-level threads, as in the Process Contention Scope scheme) in the system compete for the CPU. Operating systems that use only the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only System Contention Scope.
Document 3:::
Kernel preemption
In computer operating system design, kernel preemption is a property possessed by some kernels (the cores of operating systems), in which the CPU can be interrupted in the middle of executing kernel code and assigned other tasks (from which it later returns to finish its kernel tasks).
Document 4:::
Atomic lock
In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive: a mechanism that enforces limits on access to a resource when there are many threads of execution. A lock is designed to enforce a mutual exclusion concurrency control policy, and with a variety of possible methods there exists multiple unique implementations for different applications.
Document 5:::
O(1) scheduler
An O(1) scheduler (pronounced "O of 1 scheduler", "Big O of 1 scheduler", or "constant time scheduler") is a kernel scheduling design that can schedule processes within a constant amount of time, regardless of how many processes are running on the operating system. This is an improvement over previously used O(n) schedulers, which schedule processes in an amount of time that scales linearly based on the amounts of inputs. In the realm of real-time operating systems, deterministic execution is key, and an O(1) scheduler is able to provide scheduling services with a fixed upper-bound on execution times. The O(1) scheduler was used in Linux releases 2.6.0 thru 2.6.22 (2003-2007), at which point it was superseded by the Completely Fair Scheduler. |
epfl-collab | Assume a user program executes the following tasks. Select all options that will use a system call. | ['Read the user\'s input "Hello world" from the keyboard.', 'Send "Hello world" to another machine via Network Interface Card.', 'Write "Hello world" to a file.', 'Encrypt "Hello world" by AES.'] | A | null | Document 1:::
System call
In computing, a system call (commonly abbreviated to syscall) is the programmatic way in which a computer program requests a service from the operating system on which it is executed. This may include hardware-related services (for example, accessing a hard disk drive or accessing the device's camera), creation and execution of new processes, and communication with integral kernel services such as process scheduling. System calls provide an essential interface between a process and the operating system. In most systems, system calls can only be made from userspace processes, while in some systems, OS/360 and successors for example, privileged system code also issues system calls.
Document 2:::
Choice (command)
In computing, choice is a command that allows for batch files to prompt the user to select one item from a set of single-character choices. It is available in a number of operating system command-line shells.
Document 3:::
Systems programming
Systems programming, or system programming, is the activity of programming computer system software. The primary distinguishing characteristic of systems programming when compared to application programming is that application programming aims to produce software which provides services to the user directly (e.g. word processor), whereas systems programming aims to produce software and software platforms which provide services to other software, are performance constrained, or both (e.g. operating systems, computational science applications, game engines, industrial automation, and software as a service applications). Systems programming requires a great degree of hardware awareness. Its goal is to achieve efficient use of available resources, either because the software itself is performance critical or because even small efficiency improvements directly transform into significant savings of time or money.
Document 4:::
Process Explorer
For example, it provides a means to list or search for named resources that are held by a process or all processes. This can be used to track down what is holding a file open and preventing its use by another program. As another example, it can show the command lines used to start a program, allowing otherwise identical processes to be distinguished. Like Task Manager, it can show a process that is maxing out the CPU, but unlike Task Manager it can show which thread (with the callstack) is using the CPU – information that is not even available under a debugger.
Document 5:::
Invoke operator (computer programming)
Programs for a computer may be executed in a batch process without human interaction or a user may type commands in an interactive session of an interpreter. In this case, the "commands" are simply program instructions, whose execution is chained together. The term run is used almost synonymously. A related meaning of both "to run" and "to execute" refers to the specific action of a user starting (or launching or invoking) a program, as in "Please run the application." |
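A hedged C sketch of the tasks in this question: reading the keyboard and writing a file go through system calls (and sending over the NIC would too), while the encryption step is pure user-space computation; the XOR loop below is only a placeholder standing in for AES:

```c
#include <unistd.h>
#include <fcntl.h>
#include <stddef.h>

int main(void)
{
    char buf[64];

    /* Keyboard input: read() is a system call (the kernel owns the terminal). */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n < 0) return 1;

    /* Pure computation: this loop (a stand-in for AES, not real encryption)
     * runs entirely in user space -- no system call is needed to transform
     * bytes already in the process's memory. */
    for (ssize_t i = 0; i < n; i++)
        buf[i] ^= 0x5a;

    /* File output: open() and write() are system calls.  Sending the data
     * over the network would likewise need socket(), connect(), send(). */
    int fd = open("out.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return 1;
    write(fd, buf, (size_t)n);
    close(fd);
    return 0;
}
```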
epfl-collab | What are the drawbacks of non-preemptive scheduling compared to preemptive scheduling? | ['Bugs in one process can cause a machine to freeze up', 'It can lead to poor response time for processes', 'It can lead to starvation especially for those real-time tasks', 'Less computational resources needed for scheduling and takes shorter time to suspend the running task and switch the context.'] | C | null | Document 1:::
Least slack time scheduling
This algorithm is also known as least laxity first. Its most common use is in embedded systems, especially those with multiple processors. It imposes the simple constraint that each process on each available processor possesses the same run time, and that individual processes do not have an affinity to a certain processor. This is what lends it a suitability to embedded systems.
Document 2:::
Two-level scheduling
If this variable is not considered, resource starvation may occur and a process may not complete at all. Size of the process: larger processes must be subject to fewer swaps than smaller ones because they take a longer time to swap. Because they are larger, fewer processes can share the memory with the process. Priority: the higher the priority of the process, the longer it should stay in memory so that it completes faster.
Document 3:::
Two-level scheduling
Exactly how it selects processes is up to the implementation of the higher-level scheduler. A compromise has to be made involving the following variables: Response time: A process should not be swapped out for too long. Then some other process (or the user) will have to wait needlessly long.
Document 4:::
Nondeterministic algorithm
In computer programming, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run. A concurrent algorithm can perform differently on different runs due to a race condition. A probabilistic algorithm's behaviors depends on a random number generator.
Document 5:::
Micro-Controller Operating Systems
Lower priority tasks can be preempted by higher priority tasks at any time. Higher priority tasks use operating system (OS) services (such as a delay or event) to allow lower priority tasks to execute. OS services are provided for managing tasks and memory, communicating between tasks, and timing. |
epfl-collab | Select valid answers about file descriptors (FD): | ['FD is usually used as an argument for read and write.', 'The value of FD is unique for every file in the operating system.', 'FD is constructed by hashing the filename.', 'FDs are preserved after fork() and can be used in the new process pointing to the original files.'] | A | null | Document 1:::
Data descriptor
In computing, a data descriptor is a structure containing information that describes data. Data descriptors may be used in compilers, as a software structure at run time in languages like Ada or PL/I, or as a hardware structure in some computers such as Burroughs large systems. Data descriptors are typically used at run-time to pass argument information to called subroutines. HP OpenVMS and Multics have system-wide language-independent standards for argument descriptors. Descriptors are also used to hold information about data that is only fully known at run-time, such as a dynamically allocated array.
Document 2:::
Compound File Binary Format
Compound File Binary Format (CFBF), also called Compound File, Compound Document format, or Composite Document File V2 (CDF), is a compound document file format for storing numerous files and streams within a single file on a disk. CFBF is developed by Microsoft and is an implementation of Microsoft COM Structured Storage.Microsoft has opened the format for use by others and it is now used in a variety of programs from Microsoft Word and Microsoft Access to Business Objects. It also forms the basis of the Advanced Authoring Format.
Document 3:::
Disk Filing System
Each filename can be up to seven letters long, plus one letter for the directory in which the file is stored. The DFS is remarkable in that unlike most filing systems, there was no single vendor or implementation. The original DFS was written by Acorn, who continued to maintain their own codebase, but various disc drive vendors wrote their own implementations. Companies who wrote their own DFS implementations included Cumana, Solidisk, Opus and Watford Electronics.
Document 4:::
Segment descriptor
In memory addressing for Intel x86 computer architectures, segment descriptors are a part of the segmentation unit, used for translating a logical address to a linear address. Segment descriptors describe the memory segment referred to in the logical address. The segment descriptor (8 bytes long in 80286 and later) contains the following fields: A segment base address The segment limit which specifies the segment size Access rights byte containing the protection mechanism information Control bits
Document 5:::
Fire Dynamics Simulator
It models vegetative fuel either by explicitly defining the volume of the vegetation or, for surface fuels such as grass, by assuming uniform fuel at the air-ground boundary. FDS is a Fortran program that reads input parameters from a text file, computes a numerical solution to the governing equations, and writes user-specified output data to files. Smokeview is a companion program that reads FDS output files and produces animations on the computer screen. Smokeview has a simple menu-driven interface, while FDS does not. However, there are various third-party programs that have been developed to generate the text file containing the input parameters needed by FDS.
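A small C sketch, under POSIX assumptions, of the properties the correct options rely on: an FD is an integer handle returned by open() (not a hash of the filename), it is passed to read()/write(), and it is preserved across fork():

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    /* The FD is a small per-process integer handle returned by open(),
     * not derived from the filename. */
    int fd = open("notes.txt", O_RDONLY);
    if (fd < 0) return 1;

    if (fork() == 0) {
        /* Child: the FD table is copied on fork(), so fd still refers to
         * the same open file description (and shares its file offset). */
        char buf[32];
        read(fd, buf, sizeof buf);
        _exit(0);
    }
    wait(NULL);

    char buf[32];
    read(fd, buf, sizeof buf);   /* parent continues from the shared offset */
    close(fd);
    return 0;
}
```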
epfl-collab | Suppose a file system used only for reading immutable files in random fashion. What is the best block allocation strategy? | ['Index allocation with Hash-table', 'Index allocation with B-tree', 'Linked-list allocation', 'Continuous allocation'] | D | null | Document 1:::
Block size (data storage and transmission)
Some newer file systems, such as Btrfs and FreeBSD UFS2, attempt to solve this through techniques called block suballocation and tail merging. Other file systems such as ZFS support variable block sizes.Block storage is normally abstracted by a file system or database management system (DBMS) for use by applications and end users.
Document 2:::
Block size (data storage and transmission)
Most file systems are based on a block device, which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data, though the block size in file systems may be a multiple of the physical block size. This leads to space inefficiency due to internal fragmentation, since file lengths are often not integer multiples of block size, and thus the last block of a file may remain partially empty. This will create slack space.
Document 3:::
Extent (file systems)
Extent-based file systems can also eliminate most of the metadata overhead of large files that would traditionally be taken up by the block-allocation tree. But because the savings are small compared to the amount of stored data (for all file sizes in general) but make up a large portion of the metadata (for large files), the overall benefits in storage efficiency and performance are slight.In order to resist fragmentation, several extent-based file systems do allocate-on-flush. Many modern fault-tolerant file systems also do copy-on-write, although that increases fragmentation.
Document 4:::
Block-level storage
Block-level storage is a concept in cloud-hosted data persistence where cloud services emulate the behaviour of a traditional block device, such as a physical hard drive. Storage in such services is organised as blocks. This emulates the type of behaviour seen in traditional disks or tape storage through storage virtualization. Blocks are identified by an arbitrary and assigned identifier by which they may be stored and retrieved, but this has no obvious meaning in terms of files or documents. A file system must be applied on top of the block-level storage to map 'files' onto a sequence of blocks.
Document 5:::
Delayed allocation
Allocate-on-flush (also called delayed allocation) is a file system feature implemented in HFS+, XFS, Reiser4, ZFS, Btrfs, and ext4. The feature also closely resembles an older technique that Berkeley's UFS called "block reallocation". When blocks must be allocated to hold pending writes, disk space for the appended data is subtracted from the free-space counter, but not actually allocated in the free-space bitmap. Instead, the appended data are held in memory until they must be flushed to storage due to memory pressure, when the kernel decides to flush dirty buffers, or when the application performs the Unix sync system call, for example. |
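A short C sketch of why contiguous allocation favours random reads of immutable files: mapping a byte offset to a disk block is one arithmetic step, whereas linked-list allocation must walk the chain; the block layout and types below are illustrative:

```c
#include <stdint.h>

#define BLOCK_SIZE 4096u

/* Contiguous allocation: the i-th block of the file is simply
 * start_block + i, so a random offset maps to a disk block in O(1). */
uint64_t contiguous_block(uint64_t start_block, uint64_t offset)
{
    return start_block + offset / BLOCK_SIZE;
}

/* Linked-list allocation: reaching the block that holds `offset` means
 * following offset / BLOCK_SIZE next-pointers, i.e. O(n) block reads. */
struct ll_block { uint64_t next; /* index of the next block in the chain */ };

uint64_t linked_block(const struct ll_block *blocks, uint64_t first,
                      uint64_t offset)
{
    uint64_t b = first;
    for (uint64_t i = 0; i < offset / BLOCK_SIZE; i++)
        b = blocks[b].next;   /* one hop (block read) per block skipped */
    return b;
}
```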
epfl-collab | Which of the following operations would switch the user program from user space to kernel space? | ['Calling sin() in math library.', 'Jumping to an invalid address.', 'Invoking read() syscall.', 'Dividing integer by 0.'] | D | null | Document 1:::
OS kernel
In contrast, application programs such as browsers, word processors, or audio or video players use a separate area of memory, user space. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness, as well as preventing malfunctioning applications from affecting other applications or crashing the entire operating system. Even in systems where the kernel is included in application address spaces, memory protection is used to prevent unauthorized applications from modifying the kernel.
Document 2:::
Disk swapping
In order to use a function of the program not loaded into memory, the user would have to first remove the data disk, then insert the program disk. When the user then wanted to save their file, the reverse operation would have to be performed.
Document 3:::
OS kernel
It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit. The critical code of the kernel is usually loaded into a separate area of memory, which is protected from access by application software or other less critical parts of the operating system. The kernel performs its tasks, such as running processes, managing hardware devices such as the hard disk, and handling interrupts, in this protected kernel space.
Document 4:::
OS kernel
The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources (e.g. I/O, memory, cryptography) via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources e.g. CPU & cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup (after the bootloader).
Document 5:::
OS kernel
There are different kernel architecture designs. Monolithic kernels run entirely in a single address space with the CPU executing in supervisor mode, mainly for speed. Microkernels run most but not all of their services in user space, like user processes do, mainly for resilience and modularity. |
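A hedged C sketch (x86/Linux assumptions, link with -lm) contrasting the options: a library call such as sin() stays in user space, while a read() system call and a divide-by-zero fault both enter the kernel:

```c
#include <unistd.h>
#include <math.h>
#include <signal.h>
#include <stdio.h>

static void on_fpe(int sig) { (void)sig; _exit(2); }

int main(void)
{
    /* Library call: sin() is ordinary user-space code, no kernel entry. */
    double x = sin(1.0);

    /* System call: read() traps into the kernel via the syscall path. */
    char c;
    read(STDIN_FILENO, &c, 1);

    /* Fault: on x86, integer division by zero raises a divide-error
     * exception; the CPU enters the kernel, which delivers SIGFPE. */
    signal(SIGFPE, on_fpe);
    volatile int zero = 0;
    int y = 1 / zero;          /* traps to the kernel; handler then runs */

    printf("%f %d\n", x, y);   /* not normally reached */
    return 0;
}
```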
epfl-collab | Which flag prevents user programs from reading and writing kernel data? | ['PTE_P', 'PTE_W', 'PTE_U', 'PTE_D'] | C | null | Document 1:::
OS kernel
In contrast, application programs such as browsers, word processors, or audio or video players use a separate area of memory, user space. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness, as well as preventing malfunctioning applications from affecting other applications or crashing the entire operating system. Even in systems where the kernel is included in application address spaces, memory protection is used to prevent unauthorized applications from modifying the kernel.
Document 2:::
OS kernel
The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources (e.g. I/O, memory, cryptography) via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources e.g. CPU & cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup (after the bootloader).
Document 3:::
Capsicum (Unix)
A process can also receive capabilities via Unix sockets. These file descriptors not only control access to the file system, but also to other devices like the network sockets. Flags are also used to control more fine-grained access like reads and writes.
Document 4:::
OS kernel
It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit. The critical code of the kernel is usually loaded into a separate area of memory, which is protected from access by application software or other less critical parts of the operating system. The kernel performs its tasks, such as running processes, managing hardware devices such as the hard disk, and handling interrupts, in this protected kernel space.
Document 5:::
CPU modes
CPU modes (also called processor modes, CPU states, CPU privilege levels and other names) are operating modes for the central processing unit of some computer architectures that place restrictions on the type and scope of operations that can be performed by certain processes being run by the CPU. This design allows the operating system to run with more privileges than application software.Ideally, only highly trusted kernel code is allowed to execute in the unrestricted mode; everything else (including non-supervisory portions of the operating system) runs in a restricted mode and must use a system call (via interrupt) to request the kernel perform on its behalf any operation that could damage or compromise the system, making it impossible for untrusted programs to alter or damage other programs (or the computing system itself). In practice, however, system calls take time and can hurt the performance of a computing system, so it is not uncommon for system designers to allow some time-critical software (especially device drivers) to run with full kernel privileges. Multiple modes can be implemented—allowing a hypervisor to run multiple operating system supervisors beneath it, which is the basic design of many virtual machine systems available today. |
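A small sketch of the x86 page-table permission bits referenced by the choices; the bit values follow the standard x86 layout (and JOS's inc/mmu.h). Clearing PTE_U in a mapping is what keeps user programs from reading or writing kernel pages:

```c
#include <stdint.h>
#include <stdbool.h>

/* x86 page-table entry permission bits. */
#define PTE_P 0x001u   /* Present */
#define PTE_W 0x002u   /* Writeable */
#define PTE_U 0x004u   /* User: accessible from CPL 3 (user mode) */
#define PTE_D 0x040u   /* Dirty (set by hardware on a write) */

/* If PTE_U is clear, a user-mode access to the page faults, so kernel
 * pages are mapped without PTE_U to keep user programs out. */
bool user_can_read(uint32_t pte)
{
    return (pte & PTE_P) && (pte & PTE_U);
}

bool user_can_write(uint32_t pte)
{
    return user_can_read(pte) && (pte & PTE_W);
}
```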
epfl-collab | In which of the following cases does the TLB need to be flushed? | ['Inserting a new page into the page table for kernel.', 'Inserting a new page into the page table for a user-space application.', 'Changing the read/write permission bit in the page table.', 'Deleting a page from the page table.'] | D | null | Document 1:::
Translation look-aside buffer
A translation lookaside buffer (TLB) is a memory cache that stores the recent translations of virtual memory to physical memory. It is used to reduce the time taken to access a user memory location. It can be called an address-translation cache. It is a part of the chip's memory-management unit (MMU).
Document 2:::
Translation look-aside buffer
A TLB may reside between the CPU and the CPU cache, between CPU cache and the main memory or between the different levels of the multi-level cache. The majority of desktop, laptop, and server processors include one or more TLBs in the memory-management hardware, and it is nearly always present in any processor that utilizes paged or segmented virtual memory. The TLB is sometimes implemented as content-addressable memory (CAM).
Document 3:::
Translation look-aside buffer
The CAM search key is the virtual address, and the search result is a physical address. If the requested address is present in the TLB, the CAM search yields a match quickly and the retrieved physical address can be used to access memory. This is called a TLB hit.
Document 4:::
Delayed allocation
Allocate-on-flush (also called delayed allocation) is a file system feature implemented in HFS+, XFS, Reiser4, ZFS, Btrfs, and ext4. The feature also closely resembles an older technique that Berkeley's UFS called "block reallocation". When blocks must be allocated to hold pending writes, disk space for the appended data is subtracted from the free-space counter, but not actually allocated in the free-space bitmap. Instead, the appended data are held in memory until they must be flushed to storage due to memory pressure, when the kernel decides to flush dirty buffers, or when the application performs the Unix sync system call, for example.
Document 5:::
Translation look-aside buffer
If the requested address is not in the TLB, it is a miss, and the translation proceeds by looking up the page table in a process called a page walk. The page walk is time-consuming when compared to the processor speed, as it involves reading the contents of multiple memory locations and using them to compute the physical address. After the physical address is determined by the page walk, the virtual address to physical address mapping is entered into the TLB. The PowerPC 604, for example, has a two-way set-associative TLB for data loads and stores. Some processors have different instruction and data address TLBs. |
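A hedged, x86-specific sketch (GCC inline assembly, kernel mode only) of the two usual ways stale translations are flushed after a mapping is removed or its permissions are tightened:

```c
#include <stdint.h>

/* Invalidate the single TLB entry for one virtual address (x86, ring 0). */
static inline void tlb_invalidate_page(void *va)
{
    __asm__ volatile("invlpg (%0)" : : "r"(va) : "memory");
}

/* Reloading CR3 flushes all non-global TLB entries -- done implicitly on
 * every address-space switch, or explicitly after larger page-table edits. */
static inline void tlb_flush_all(uintptr_t pgdir_pa)
{
    __asm__ volatile("mov %0, %%cr3" : : "r"(pgdir_pa) : "memory");
}
```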
epfl-collab | In x86, select all synchronous exceptions? | ['Divide error', 'Page Fault', 'Timer', 'Keyboard'] | A | null | Document 1:::
Triple fault
On the x86 computer architecture, a triple fault is a special kind of exception generated by the CPU when an exception occurs while the CPU is trying to invoke the double fault exception handler, which itself handles exceptions occurring while trying to invoke a regular exception handler. x86 processors beginning with the 80286 will cause a shutdown cycle to occur when a triple fault is encountered. This typically causes the motherboard hardware to initiate a CPU reset, which, in turn, causes the whole computer to reboot.
Document 2:::
Segmentation violation
Processes can in some cases install a custom signal handler, allowing them to recover on their own, but otherwise the OS default signal handler is used, generally causing abnormal termination of the process (a program crash), and sometimes a core dump. Segmentation faults are a common class of error in programs written in languages like C that provide low-level memory access and few to no safety checks. They arise primarily due to errors in use of pointers for virtual memory addressing, particularly illegal access.
Document 3:::
Segmentation violation
In computing, a segmentation fault (often shortened to segfault) or access violation is a fault, or failure condition, raised by hardware with memory protection, notifying an operating system (OS) the software has attempted to access a restricted area of memory (a memory access violation). On standard x86 computers, this is a form of general protection fault. The operating system kernel will, in response, usually perform some corrective action, generally passing the fault on to the offending process by sending the process a signal.
Document 4:::
Exception handling syntax
Exception handling syntax is the set of keywords and/or structures provided by a computer programming language to allow exception handling, which separates the handling of errors that arise during a program's operation from its ordinary processes. Syntax for exception handling varies between programming languages, partly to cover semantic differences but largely to fit into each language's overall syntactic structure. Some languages do not call the relevant concept "exception handling"; others may not have direct facilities for it, but can still provide means to implement it. Most commonly, error handling uses a try... block, and errors are created via a throw statement, but there is significant variation in naming and syntax.
Document 5:::
Transactional Synchronization Extensions
Transactional Synchronization Extensions (TSX), also called Transactional Synchronization Extensions New Instructions (TSX-NI), is an extension to the x86 instruction set architecture (ISA) that adds hardware transactional memory support, speeding up execution of multi-threaded software through lock elision. According to different benchmarks, TSX/TSX-NI can provide around 40% faster applications execution in specific workloads, and 4–5 times more database transactions per second (TPS).TSX/TSX-NI was documented by Intel in February 2012, and debuted in June 2013 on selected Intel microprocessors based on the Haswell microarchitecture. Haswell processors below 45xx as well as R-series and K-series (with unlocked multiplier) SKUs do not support TSX/TSX-NI. In August 2014, Intel announced a bug in the TSX/TSX-NI implementation on current steppings of Haswell, Haswell-E, Haswell-EP and early Broadwell CPUs, which resulted in disabling the TSX/TSX-NI feature on affected CPUs via a microcode update.In 2016, a side-channel timing attack was found by abusing the way TSX/TSX-NI handles transactional faults (i.e. page faults) in order to break kernel address space layout randomization (KASLR) on all major operating systems. In 2021, Intel released a microcode update that disabled the TSX/TSX-NI feature on CPU generations from Skylake to Coffee Lake, as a mitigation for discovered security issues.Support for TSX/TSX-NI emulation is provided as part of the Intel Software Development Emulator. There is also experimental support for TSX/TSX-NI emulation in a QEMU fork. |
epfl-collab | Which of the execution of an application are possible on a single-core machine? | ['Both concurrent and parallel execution', 'Parallel execution', 'Neither concurrent or parallel execution', 'Concurrent execution'] | D | null | Document 1:::
Superscalar execution
A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit.
Document 2:::
Many-core processing unit
Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading. Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU). Core count goes up to even dozens, and for specialized chips over 10,000, and in supercomputers (i.e. clusters of chips) the count can go over 10 million (and in one case up to 20 million processing elements total in addition to host processors).The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation.
Document 3:::
Single cycle processor
A single cycle processor is a processor that carries out one instruction in a single clock cycle.
Document 4:::
Many-core processing unit
A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely.
Document 5:::
VISC architecture
This form of multithreading can increase single-threaded performance by allowing a single thread to use all resources of the CPU. The allocation of resources is dynamic on a near-single-cycle latency level (1–4 cycles), depending on the change in allocation and on individual application needs.
epfl-collab | In an x86 multiprocessor system with JOS, select all the correct options. Assume every Env has a single thread. | ['One Env could run on two different processors at different times.', 'Two Envs could run on the same processor simultaneously.', 'Two Envs could run on two different processors simultaneously.', 'One Env could run on two different processors simultaneously.'] | C | null | Document 1:::
Java Optimized Processor
Java Optimized Processor (JOP) is a Java processor, an implementation of Java virtual machine (JVM) in hardware. JOP is free hardware under the GNU General Public License, version 3. The intention of JOP is to provide a small hardware JVM for embedded real-time systems. The main feature is the predictability of the execution time of Java bytecodes. JOP is implemented over an FPGA.
Document 2:::
MultiProcessor Specification
The MultiProcessor Specification (MPS) for the x86 architecture is an open standard describing enhancements to both operating systems and firmware, which will allow them to work with x86-compatible processors in a multi-processor configuration. MPS covers Advanced Programmable Interrupt Controller (APIC) architectures. Version 1.1 of the specification was released on April 11, 1994. Version 1.4 of the specification was released on July 1, 1995, which added extended configuration tables to improve support for multiple PCI bus configurations and improve expandability.
Document 3:::
System Contention Scope
In computer science, the System Contention Scope is one of two thread-scheduling schemes used in operating systems. This scheme is used by the kernel to decide which kernel-level thread to schedule onto a CPU, wherein all threads (as opposed to only user-level threads, as in the Process Contention Scope scheme) in the system compete for the CPU. Operating systems that use only the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only System Contention Scope.
Document 4:::
Cellular multiprocessing
Cellular multiprocessing is a multiprocessing computing architecture designed initially for Intel central processing units from Unisys, a worldwide information technology consulting company. It consists of the partitioning of processors into separate computing environments running different operating systems. Providing up to 32 processors that are crossbar connected to 64GB of memory and 96 PCI cards, a CMP system provides mainframe-like architecture using Intel CPUs. CMP supports Windows NT and Windows 2000 Server, AIX, Novell NetWare and UnixWare and can be run as one large SMP system or multiple systems with variant operating systems.
Document 5:::
Cray J90
All input/output in a J90 system was handled by an IOS (Input/Output Subsystem) called IOS Model V. The IOS-V was based on the VME64 bus and SPARC I/O processors (IOPs) running the VxWorks RTOS. The IOS was programmed to emulate the IOS Model E, used in the larger Cray Y-MP systems, in order to minimize changes in the UNICOS operating system. By using standard VME boards, a wide variety of commodity peripherals could be used. |
epfl-collab | In JOS, suppose a value is passed between two Envs. What is the minimum number of executed system calls? | ['2', '4', '1', '3'] | A | null | Document 1:::
Virtual Execution System
The Virtual Execution System (VES) is a run-time system of the Common Language Infrastructure CLI which provides an environment for executing managed code. It provides direct support for a set of built-in data types, defines a hypothetical machine with an associated machine model and state, a set of control flow constructs, and an exception handling model. To a large extent, the purpose of the VES is to provide the support required to execute the Common Intermediate Language CIL instruction set.
Document 2:::
Instruction path length
In computer performance, the instruction path length is the number of machine code instructions required to execute a section of a computer program. The total path length for the entire program could be deemed a measure of the algorithm's performance on a particular computer hardware. The path length of a simple conditional instruction would normally be considered as equal to 2, one instruction to perform the comparison and another to take a branch if the particular condition is satisfied.
Document 3:::
Java Optimized Processor
Java Optimized Processor (JOP) is a Java processor, an implementation of Java virtual machine (JVM) in hardware. JOP is free hardware under the GNU General Public License, version 3. The intention of JOP is to provide a small hardware JVM for embedded real-time systems. The main feature is the predictability of the execution time of Java bytecodes. JOP is implemented over an FPGA.
Document 4:::
Io (programming language)
Io uses actors for concurrency. Remarkable features of Io are its minimal size and openness to using external code resources. Io is executed by a small, portable virtual machine.
Document 5:::
Linear Code Sequence and Jump
An LCSAJ is a software code path fragment consisting of a sequence of code (a linear code sequence) followed by a control flow Jump, and consists of the following three items: the start of the linear sequence of executable statements the end of the linear sequence the target line to which control flow is transferred at the end of the linear sequence.Unlike (maximal) basic blocks, LCSAJs can overlap with each other because a jump (out) may occur in the middle of an LCSAJ, while it isn't allowed in the middle of a basic block. In particular, conditional jumps generate overlapping LCSAJs: one which runs through to where the condition evaluates to false and another that ends at the jump when the condition evaluates to true (the example given further below in this article illustrates such an occurrence). According to a monograph from 1986, LCSAJs were typically four times larger than basic blocks.The formal definition of a LCSAJ can be given in terms of basic blocks as follows: a sequence of one or more consecutively numbered basic blocks, p, (p+1), ..., q, of a code unit, followed by a control flow jump either out of the code or to a basic block numbered r, where r≠(q+1), and either p=1 or there exists a control flow jump to block p from some other block in the unit. (A basic block to which such a control flow jump can be made is referred to as a target of the jump.) According to Jorgensen's 2013 textbook, outside Great Britain and ISTQB literature, the same notion is called DD-path. |
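A sketch of why the minimum is two system calls, assuming the standard JOS lab-4 IPC interface (sys_ipc_try_send / sys_ipc_recv). Names, the UTOP "no page transfer" convention, and the retry loop follow the usual course skeleton and may differ in other versions; the minimum successful path is one call on each side:

```c
#include <inc/lib.h>   /* JOS user library: sys_ipc_try_send, sys_ipc_recv, thisenv */

/* Sender Env: one system call delivers a 32-bit value to `to`.
 * Passing an address at or above UTOP means "no page transfer" here. */
void send_value(envid_t to, uint32_t value)
{
    while (sys_ipc_try_send(to, value, (void *)UTOP, 0) < 0)
        sys_yield();   /* retries only if the receiver is not yet waiting */
}

/* Receiver Env: one system call blocks until a value arrives; the kernel
 * records it in the Env structure, visible as thisenv->env_ipc_value. */
uint32_t recv_value(void)
{
    sys_ipc_recv((void *)UTOP);
    return thisenv->env_ipc_value;
}
```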
epfl-collab | What strace tool does? | ['To remove wildcards from the string.', 'It prints out system calls for given program. These systems calls are called only for that particular instance of the program.', 'To trace a symlink. I.e. to find where the symlink points to.', 'It prints out system calls for given program. These system calls are always called when executing the program.'] | B | null | Document 1:::
Strace
strace is a diagnostic, debugging and instructional userspace utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state. The operation of strace is made possible by the kernel feature known as ptrace. Some Unix-like systems provide other diagnostic tools similar to strace, such as truss.
Document 2:::
Malware research
The executed binary code is traced using strace or more precise taint analysis to compute data-flow dependencies among system calls. The result is a directed graph $G=(V,E)$ such that nodes are system calls, and edges represent dependencies. For example, $(s,t)\in E$ if a result returned by system call $s$ (either directly as a result or indirectly through output parameters) is later used as a parameter of system call $t$.
Document 3:::
DTrace
DTrace is a comprehensive dynamic tracing framework originally created by Sun Microsystems for troubleshooting kernel and application problems on production systems in real time. Originally developed for Solaris, it has since been released under the free Common Development and Distribution License (CDDL) in OpenSolaris and its descendant illumos, and has been ported to several other Unix-like systems. DTrace can be used to get a global overview of a running system, such as the amount of memory, CPU time, filesystem and network resources used by the active processes.
Document 4:::
Synthesis Toolkit
The Synthesis Toolkit (STK) is an open source API for real time audio synthesis with an emphasis on classes to facilitate the development of physical modelling synthesizers. It is written in C++ and is written and maintained by Perry Cook at Princeton University and Gary Scavone at McGill University. It contains both low-level synthesis and signal processing classes (oscillators, filters, etc.) and higher-level instrument classes which contain examples of most of the currently available physical modelling algorithms in use today.
Document 5:::
Synthesis Toolkit
STK is free software, but a number of its classes, particularly some physical modelling algorithms, are covered by patents held by Stanford University and Yamaha.The STK is used widely in creating software synthesis applications. Versions of the STK instrument classes have been integrated into ChucK, Csound, Real-Time Cmix, Max/MSP (as part of PeRColate), SuperCollider and FAUST. It has been ported to SymbianOS and iOS as well. |
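A tiny C program to illustrate the marked answer: running it under `strace ./tiny` lists the system calls made by that particular run (execve, the write below, exit_group, and so on); a different run, with different input or code paths, can produce a different list:

```c
/* tiny.c -- a program whose system calls strace can display. */
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello\n";
    /* Appears in the strace output as write(1, "hello\n", 6). */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}
```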
epfl-collab | What is a good distance metric to be used when you want to compute the similarity between documents independent of their length?A penalty will be applied for any incorrect answers. | ['Chi-squared distance', 'Manhattan distance', 'Euclidean distance', 'Cosine similarity'] | D | null | Document 1:::
Similarity measure
In statistics and related fields, a similarity measure or similarity function or similarity metric is a real-valued function that quantifies the similarity between two objects. Although no single definition of a similarity exists, usually such measures are in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects. Though, in more broad terms, a similarity function may also satisfy metric axioms. Cosine similarity is a commonly used similarity measure for real-valued vectors, used in (among other fields) information retrieval to score the similarity of documents in the vector space model. In machine learning, common kernel functions such as the RBF kernel can be viewed as similarity functions.
Document 2:::
Cosine similarity
Cosine similarity can be seen as a method of normalizing document length during comparison. In the case of information retrieval, the cosine similarity of two documents will range from $0$ to $1$, since the term frequencies cannot be negative. This remains true when using TF-IDF weights.
Document 3:::
Jaro–Winkler distance
In computer science and statistics, the Jaro–Winkler similarity is a string metric measuring an edit distance between two sequences. It is a variant of the Jaro distance metric (1989, Matthew A. Jaro) proposed in 1990 by William E. Winkler. The Jaro–Winkler distance uses a prefix scale $p$ which gives more favourable ratings to strings that match from the beginning for a set prefix length $\ell$. The higher the Jaro–Winkler distance for two strings is, the less similar the strings are. The score is normalized such that 0 means an exact match and 1 means there is no similarity. The original paper actually defined the metric in terms of similarity, so the distance is defined as the inversion of that value (distance = 1 − similarity). Although often referred to as a distance metric, the Jaro–Winkler distance is not a metric in the mathematical sense of that term because it does not obey the triangle inequality.
Document 4:::
Information distance
Information distance is the distance between two finite objects (represented as computer files) expressed as the number of bits in the shortest program which transforms one object into the other one or vice versa on a universal computer. This is an extension of Kolmogorov complexity. The Kolmogorov complexity of a single finite object is the information in that object; the information distance between a pair of finite objects is the minimum information required to go from one object to the other or vice versa.
Document 5:::
Edit distance
In computational linguistics and computer science, edit distance is a string metric, i.e. a way of quantifying how dissimilar two strings (e.g., words) are to one another, that is measured by counting the minimum number of operations required to transform one string into the other. Edit distances find applications in natural language processing, where automatic spelling correction can determine candidate corrections for a misspelled word by selecting words from a dictionary that have a low distance to the word in question. In bioinformatics, it can be used to quantify the similarity of DNA sequences, which can be viewed as strings of the letters A, C, G and T. Different definitions of an edit distance use different sets of string operations. Levenshtein distance operations are the removal, insertion, or substitution of a character in the string. Being the most common metric, the term Levenshtein distance is often used interchangeably with edit distance. |
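A short C sketch of cosine similarity showing why it is length-independent: scaling either vector by a positive constant cancels between the dot product and the norms:

```c
#include <math.h>
#include <stddef.h>

/* Cosine similarity of two term-weight vectors.  Multiplying a document's
 * vector by any positive constant (e.g. repeating the document to double
 * its length) scales the dot product and that vector's norm equally, so
 * the score is unchanged -- comparison is independent of document length. */
double cosine_similarity(const double *a, const double *b, size_t n)
{
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (size_t i = 0; i < n; i++) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    if (na == 0.0 || nb == 0.0)
        return 0.0;   /* convention: similarity with an empty document is 0 */
    return dot / (sqrt(na) * sqrt(nb));
}
```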
epfl-collab | For this question, one or more assertions can be correct. Tick only the correct assertion(s). There will be a penalty for wrong assertions ticked. Which of the following associations can be considered as illustrative examples for inflectional
morphology (with here the simplifying assumption that canonical forms are restricted to the roots
only)? | ['(hypothesis, hypotheses)', '(to go, went)', '(speaking, talking)', '(activate, action)'] | A | null | Document 1:::
Inflection
In linguistic morphology, inflection (or inflexion) is a process of word formation in which a word is modified to express different grammatical categories such as tense, case, voice, aspect, person, number, gender, mood, animacy, and definiteness. The inflection of verbs is called conjugation, and one can refer to the inflection of nouns, adjectives, adverbs, pronouns, determiners, participles, prepositions and postpositions, numerals, articles, etc., as declension. An inflection expresses grammatical categories with affixation (such as prefix, suffix, infix, circumfix, and transfix), apophony (as Indo-European ablaut), or other modifications. For example, the Latin verb ducam, meaning "I will lead", includes the suffix -am, expressing person (first), number (singular), and tense-mood (future indicative or present subjunctive).
Document 2:::
Inflection
Analytic languages that do not make use of derivational morphemes, such as Standard Chinese, are said to be isolating. Requiring the forms or inflections of more than one word in a sentence to be compatible with each other according to the rules of the language is known as concord or agreement.
Document 3:::
Inflection
Words that are never subject to inflection are said to be invariant; for example, the English verb must is an invariant item: it never takes a suffix or changes form to signify a different grammatical category. Its categories can be determined only from its context. Languages that seldom make use of inflection, such as English, are said to be analytic.
Document 4:::
Inflection
For example, in "the man jumps", "man" is a singular noun, so "jump" is constrained in the present tense to use the third person singular suffix "s". Languages that have some degree of inflection are synthetic languages. These can be highly inflected (such as Latin, Greek, Biblical Hebrew, and Sanskrit), or slightly inflected (such as English, Dutch, Persian). Languages that are so inflected that a sentence can consist of a single highly inflected word (such as many Native American languages) are called polysynthetic languages. Languages in which each inflection conveys only a single grammatical category, such as Finnish, are known as agglutinative languages, while languages in which a single inflection can convey multiple grammatical roles (such as both nominative case and plural, as in Latin and German) are called fusional.
Document 5:::
Inflection
The use of this suffix is an inflection. In contrast, in the English clause "I will lead", the word lead is not inflected for any of person, number, or tense; it is simply the bare form of a verb. The inflected form of a word often contains both one or more free morphemes (a unit of meaning which can stand by itself as a word), and one or more bound morphemes (a unit of meaning which cannot stand alone as a word). |
epfl-collab | Which of the following statements are true? | ['A $k$-nearest-neighbor classifier is sensitive to outliers.', 'k-nearest-neighbors cannot be used for regression.', 'The more training examples, the more accurate the prediction of a $k$-nearest-neighbor classifier.', 'Training a $k$-nearest-neighbor classifier takes more computational time than applying it / using it for prediction.'] | C | null | Document 1:::
Markov property (group theory)
In the mathematical subject of group theory, the Adian–Rabin theorem is a result that states that most "reasonable" properties of finitely presentable groups are algorithmically undecidable. The theorem is due to Sergei Adian (1955) and, independently, Michael O. Rabin (1958).
Document 2:::
Rice theorem
In computability theory, Rice's theorem states that all non-trivial semantic properties of programs are undecidable. A semantic property is one about the program's behavior (for instance, does the program terminate for all inputs), unlike a syntactic property (for instance, does the program contain an if-then-else statement). A property is non-trivial if it is neither true for every partial computable function, nor false for every partial computable function.
Document 3:::
Remarks on the Foundations of Mathematics
Thus it can only be true, but unprovable." Just as we can ask, " 'Provable' in what system?," so we must also ask, "'True' in what system?" "True in Russell's system" means, as was said, proved in Russell's system, and "false" in Russell's system means the opposite has been proved in Russell's system.—Now, what does your "suppose it is false" mean?
Document 4:::
Löwenheim–Skolem theorem
As a consequence, first-order theories are unable to control the cardinality of their infinite models. The (downward) Löwenheim–Skolem theorem is one of the two key properties, along with the compactness theorem, that are used in Lindström's theorem to characterize first-order logic. In general, the Löwenheim–Skolem theorem does not hold in stronger logics such as second-order logic.
Document 5:::
Binary relations
The statement $(x,y)\in R$ reads "x is R-related to y" and is denoted by xRy. The domain of definition or active domain of R is the set of all x such that xRy for at least one y. The codomain of definition, active codomain, image or range of R is the set of all y such that xRy for at least one x. The field of R is the union of its domain of definition and its codomain of definition. When $X=Y$, a binary relation is called a homogeneous relation (or endorelation). To emphasize the fact that X and Y are allowed to be different, a binary relation is also called a heterogeneous relation. In a binary relation, the order of the elements is important; if $x\neq y$ then yRx can be true or false independently of xRy. For example, 3 divides 9, but 9 does not divide 3. |
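A minimal C sketch (1-nearest-neighbour for brevity) showing why k-NN "training" amounts to storing the examples while prediction scans every stored example, and why a single outlying neighbour can flip the prediction when k is small:

```c
#include <math.h>
#include <stddef.h>

/* Predict the label of `query` as the label of its nearest stored example
 * (squared Euclidean distance).  All the computational cost is here, at
 * prediction time; "training" is just keeping train_x / train_y around. */
int knn_predict_1nn(const double *train_x, const int *train_y,
                    size_t n_train, size_t dim, const double *query)
{
    double best = INFINITY;
    int label = -1;
    for (size_t i = 0; i < n_train; i++) {
        double d = 0.0;
        for (size_t j = 0; j < dim; j++) {
            double diff = train_x[i * dim + j] - query[j];
            d += diff * diff;           /* squared Euclidean distance */
        }
        if (d < best) {
            best = d;
            label = train_y[i];         /* one outlier here decides the vote */
        }
    }
    return label;
}
```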
epfl-collab | In Text Representation learning, which of the following statements is correct? | ['FastText performs unsupervised learning of word vectors.', 'If you fix all word vectors, and only train the remaining parameters, then FastText in the two-class case reduces to being just a linear classifier.', 'Learning GloVe vectors can be done using SGD in a streaming fashion, by streaming through the input text only once.', 'Every recommender systems algorithm for learning a matrix factorization $\\boldsymbol{W} \\boldsymbol{Z}^{\\top}$ approximating the observed entries in least square sense does also apply to learn GloVe word vectors.'] | D | null | Document 1:::
Sequence labeling
In machine learning, sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values. A common example of a sequence labeling task is part of speech tagging, which seeks to assign a part of speech to each word in an input sentence or document. Sequence labeling can be treated as a set of independent classification tasks, one per member of the sequence. However, accuracy is generally improved by making the optimal label for a given element dependent on the choices of nearby elements, using special algorithms to choose the globally best set of labels for the entire sequence at once.
Document 2:::
Sequence labeling
In machine learning, sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values. A common example of a sequence labeling task is part of speech tagging, which seeks to assign a part of speech to each word in an input sentence or document. Sequence labeling can be treated as a set of independent classification tasks, one per member of the sequence. However, accuracy is generally improved by making the optimal label for a given element dependent on the choices of nearby elements, using special algorithms to choose the globally best set of labels for the entire sequence at once.
Document 3:::
Feature learning
In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process.
Document 4:::
Feature learning
In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process.
Document 5:::
Feature learning
In supervised feature learning, features are learned using labeled input data. Labeled data includes input-label pairs where the input is given to the model and it must produce the ground truth label as the correct answer. This can be leveraged to generate feature representations with the model which result in high label prediction accuracy. |
epfl-collab | Consider a matrix factorization problem of the form $\mathbf{X}=\mathbf{W Z}^{\top}$ to obtain an item-user recommender system where $x_{i j}$ denotes the rating given by $j^{\text {th }}$ user to the $i^{\text {th }}$ item . We use Root mean square error (RMSE) to gauge the quality of the factorization obtained. Select the correct option. | ['Given a new item and a few ratings from existing users, we need to retrain the already trained recommender system from scratch to generate robust ratings for the user-item pairs containing this item.', 'For obtaining a robust factorization of a matrix $\\mathbf{X}$ with $D$ rows and $N$ elements where $N \\ll D$, the latent dimension $\\mathrm{K}$ should lie somewhere between $D$ and $N$.', 'None of the other options are correct.', 'Regularization terms for $\\mathbf{W}$ and $\\mathbf{Z}$ in the form of their respective Frobenius norms are added to the RMSE so that the resulting objective function becomes convex.'] | C | null | Document 1:::
Maximum inner-product search
Maximum inner-product search (MIPS) is a search problem, with a corresponding class of search algorithms which attempt to maximise the inner product between a query and the data items to be retrieved. MIPS algorithms are used in a wide variety of big data applications, including recommendation algorithms and machine learning. Formally, for a database of vectors $x_i$ defined over a set of labels $S$ in an inner product space with an inner product $\langle \cdot, \cdot \rangle$ defined on it, MIPS search can be defined as the problem of determining $\arg\max_{i \in S} \langle x_i, q \rangle$ for a given query $q$. Although there is an obvious linear-time implementation, it is generally too slow to be used on practical problems. However, efficient algorithms exist to speed up MIPS search. Under the assumption of all vectors in the set having constant norm, MIPS can be viewed as equivalent to a nearest neighbor search (NNS) problem in which maximizing the inner product is equivalent to minimizing the corresponding distance metric in the NNS problem. Like other forms of NNS, MIPS algorithms may be approximate or exact. MIPS search is used as part of DeepMind's RETRO algorithm.
Document 2:::
Matrix factorization (algebra)
(1) For $S = \mathbb{C}[[x]]$ and $f = x^n$ there is a matrix factorization $d_0 : S \rightleftarrows S : d_1$ where $d_0 = x^i$, $d_1 = x^{n-i}$ for $0 \leq i \leq n$. (2) If $S = \mathbb{C}[[x, y, z]]$ and $f = xy + xz + yz$, then there is a matrix factorization $d_0 : S^2 \rightleftarrows S^2 : d_1$ where $d_0 = \begin{bmatrix} z & y \\ x & -x-y \end{bmatrix}$ and $d_1 = \begin{bmatrix} x+y & y \\ x & -z \end{bmatrix}$.
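To make the second example concrete, one can multiply the two factors and check that their product is $f$ times the identity, which is the defining condition of a matrix factorization of $f$. The matrices below are taken from the excerpt above; the power-series rings in the example were reconstructed from the garbled source and are an assumption.

$$ d_0 d_1 = \begin{bmatrix} z & y \\ x & -x-y \end{bmatrix} \begin{bmatrix} x+y & y \\ x & -z \end{bmatrix} = \begin{bmatrix} xy+xz+yz & 0 \\ 0 & xy+xz+yz \end{bmatrix} = f \, I_2, $$

and the product in the other order, $d_1 d_0$, works out to $f \, I_2$ as well.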
Document 3:::
LU factorization
Let A be a square matrix. An LU factorization refers to the factorization of A, with proper row and/or column orderings or permutations, into two factors – a lower triangular matrix L and an upper triangular matrix U: A = LU. In the lower triangular matrix all elements above the diagonal are zero, in the upper triangular matrix, all the elements below the diagonal are zero.
Document 4:::
Iterative proportional fitting
We have also the entropy maximization, information loss minimization (or cross-entropy) or RAS which consists of factoring the matrix rows to match the specified row totals, then factoring its columns to match the specified column totals; each step usually disturbs the previous step’s match, so these steps are repeated in cycles, re-adjusting the rows and columns in turn, until all specified marginal totals are satisfactorily approximated. However, all algorithms give the same solution. In three- or more-dimensional cases, adjustment steps are applied for the marginals of each dimension in turn, the steps likewise repeated in cycles.
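The row/column scaling cycle described above is simple enough to show in a few lines. The sketch below is a minimal two-dimensional illustration, not a library implementation; the function name, tolerance and iteration cap are arbitrary choices, and it assumes the row and column targets have equal sums.

```python
import numpy as np

def ipf(seed, row_totals, col_totals, tol=1e-9, max_iter=1000):
    """Scale a nonnegative seed matrix so its margins match the target totals."""
    x = seed.astype(float).copy()
    for _ in range(max_iter):
        # Factor the rows to match the specified row totals.
        x *= (row_totals / x.sum(axis=1))[:, None]
        # Factor the columns to match the specified column totals
        # (this disturbs the rows, hence the repeated cycles).
        x *= col_totals / x.sum(axis=0)
        # Stop once the row margins are also (approximately) satisfied.
        if np.allclose(x.sum(axis=1), row_totals, atol=tol):
            break
    return x

seed = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ipf(seed, row_totals=np.array([4.0, 6.0]), col_totals=np.array([5.0, 5.0])))
```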
Document 5:::
Factorization
Factorization may also refer to more general decompositions of a mathematical object into the product of smaller or simpler objects. For example, every function may be factored into the composition of a surjective function with an injective function. Matrices possess many kinds of matrix factorizations. For example, every matrix has a unique LUP factorization as a product of a lower triangular matrix L with all diagonal entries equal to one, an upper triangular matrix U, and a permutation matrix P; this is a matrix formulation of Gaussian elimination. |
epfl-collab | You are doing your ML project. It is a regression task under a square loss. Your neighbor uses linear regression and least squares. You are smarter. You are using a neural net with 10 layers and activation functions $f(x)=3 x$. You have a powerful laptop but not a supercomputer. You are betting your neighbor a beer at Satellite who will have a substantially better score. However, at the end it will essentially be a tie, so we decide to have two beers and both pay. What is the reason for the outcome of this bet? | ['Because I should have used only one layer.', 'Because I should have used more layers.', 'Because it is almost impossible to train a network with 10 layers without a supercomputer.', 'Because we use exactly the same scheme.'] | D | null | Document 1:::
Learning rule
Depending on the complexity of actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations. The learning rule is one of the factors which decides how fast or how accurately the artificial network can be developed. Depending upon the process to develop the network there are three main models of machine learning: Unsupervised learning Supervised learning Reinforcement learning
Document 2:::
Learning rule
Depending on the complexity of actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations. The learning rule is one of the factors which decides how fast or how accurately the artificial network can be developed. Depending upon the process to develop the network there are three main models of machine learning: Unsupervised learning Supervised learning Reinforcement learning
Document 3:::
Odds algorithm
In decision theory, the odds algorithm (or Bruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows from the odds strategy, and the importance of the odds strategy lies in its optimality, as explained below. The odds algorithm applies to a class of problems called last-success problems. Formally, the objective in these problems is to maximize the probability of identifying in a sequence of sequentially observed independent events the last event satisfying a specific criterion (a "specific event").
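A minimal sketch of the odds strategy described above, under the assumption of independent events with known success probabilities: sum the odds r_j = p_j / (1 - p_j) from the last event backwards until the running sum reaches 1, and start accepting the first success from that index onward. The function name and interface are illustrative, not from any standard library.

```python
def odds_strategy_index(probabilities):
    """Return the 1-based index s from which to accept the first success (Bruss's odds strategy)."""
    running_odds = 0.0
    s = 1  # if the summed odds never reach 1, observe from the very beginning
    for j in range(len(probabilities), 0, -1):
        p = probabilities[j - 1]
        if p >= 1.0:              # a certain success: never start later than here
            s = j
            break
        running_odds += p / (1.0 - p)
        if running_odds >= 1.0:
            s = j
            break
    return s

# Example: secretary problem with n = 4 candidates, where the j-th
# observation is a relative best with probability 1/j.
print(odds_strategy_index([1.0, 1/2, 1/3, 1/4]))  # prints 2
```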
Document 4:::
Machine learning
Machine learning (ML) is an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive, and instead the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. Recently, generative artificial neural networks have been able to surpass results of many previous approaches. Machine-learning approaches have been applied to large language models, computer vision, speech recognition, email filtering, agriculture and medicine, where it is too costly to develop algorithms to perform the needed tasks.The mathematical foundations of ML are provided by mathematical optimization (mathematical programming) methods. Data mining is a related (parallel) field of study, focusing on exploratory data analysis through unsupervised learning.ML is known in its application across business problems under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods.
Document 5:::
Logic learning machine
Logic learning machine (LLM) is a machine learning method based on the generation of intelligible rules. LLM is an efficient implementation of the Switching Neural Network (SNN) paradigm, developed by Marco Muselli, Senior Researcher at the Italian National Research Council CNR-IEIIT in Genoa. LLM has been employed in many different sectors, including the field of medicine (orthopedic patient classification, DNA micro-array analysis and Clinical Decision Support Systems ), financial services and supply chain management. |
epfl-collab | Which of the following is correct regarding Louvain algorithm? | ['Modularity is always maximal for the communities found at the top level of the community hierarchy', 'Clique is the only topology of nodes where the algorithm detects the same communities, independently of the starting point', 'If n cliques of the same order are connected cyclically with n-1 edges, then the algorithm will always detect the same communities, independently of the starting point', 'It creates a hierarchy of communities with a common root'] | C | null | Document 1:::
Suurballe's algorithm
In theoretical computer science and network routing, Suurballe's algorithm is an algorithm for finding two disjoint paths in a nonnegatively-weighted directed graph, so that both paths connect the same pair of vertices and have minimum total length. The algorithm was conceived by John W. Suurballe and published in 1974. The main idea of Suurballe's algorithm is to use Dijkstra's algorithm to find one path, to modify the weights of the graph edges, and then to run Dijkstra's algorithm a second time.
Document 2:::
Pohlig–Hellman algorithm
In group theory, the Pohlig–Hellman algorithm, sometimes credited as the Silver–Pohlig–Hellman algorithm, is a special-purpose algorithm for computing discrete logarithms in a finite abelian group whose order is a smooth integer. The algorithm was introduced by Roland Silver, but first published by Stephen Pohlig and Martin Hellman (independent of Silver).
Document 3:::
Dijkstra–Scholten algorithm
The Dijkstra–Scholten algorithm (named after Edsger W. Dijkstra and Carel S. Scholten) is an algorithm for detecting termination in a distributed system. The algorithm was proposed by Dijkstra and Scholten in 1980.First, consider the case of a simple process graph which is a tree. A distributed computation which is tree-structured is not uncommon.
Document 4:::
Freivalds' algorithm
Freivalds' algorithm (named after Rūsiņš Mārtiņš Freivalds) is a probabilistic randomized algorithm used to verify matrix multiplication. Given three n × n matrices A, B, and C, a general problem is to verify whether A × B = C. A naïve algorithm would compute the product A × B explicitly and compare term by term whether this product equals C. However, the best known matrix multiplication algorithm runs in O(n^2.3729) time. Freivalds' algorithm utilizes randomization in order to reduce this time bound to O(n^2) with high probability. In O(kn^2) time the algorithm can verify a matrix product with probability of failure less than 2^(−k).
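The randomized check above can be sketched in a few lines of Python: multiply both sides by a random 0/1 vector and compare the resulting vectors, repeating to shrink the failure probability. This is an illustration, not a reference implementation.

```python
import random

def freivalds_check(A, B, C, rounds=10):
    """Probabilistically verify A @ B == C for n x n matrices given as lists of lists."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]                     # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]   # B r
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)] # A (B r)
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]   # C r
        if ABr != Cr:      # any mismatch proves A @ B != C
            return False
    return True            # "probably equal" after all rounds

A = [[2, 3], [3, 4]]
B = [[1, 0], [1, 2]]
C = [[5, 6], [7, 8]]
print(freivalds_check(A, B, C))   # True: A @ B really equals C here
```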
Document 5:::
Floyd algorithm
In computer science, the Floyd–Warshall algorithm (also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but with no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation R, or (in connection with the Schulze voting system) widest paths between all pairs of vertices in a weighted graph.
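For concreteness, here is a small Python sketch of the all-pairs shortest-path computation described above, using a dict-of-dicts graph representation; the representation and names are arbitrary choices for illustration.

```python
import math

def floyd_warshall(weights):
    """All-pairs shortest path lengths for a dict-of-dicts weighted digraph."""
    nodes = list(weights)
    dist = {u: {v: math.inf for v in nodes} for u in nodes}
    for u in nodes:
        dist[u][u] = 0.0
        for v, w in weights[u].items():
            dist[u][v] = min(dist[u][v], w)
    # Standard triple loop: successively allow k as an intermediate vertex.
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = {"a": {"b": 1.0, "c": 4.0}, "b": {"c": 2.0}, "c": {}}
print(floyd_warshall(graph)["a"]["c"])  # 3.0, via b
```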
epfl-collab | Let the first four retrieved documents be N N R R, where N denotes a non-relevant and R a relevant document. Then the MAP (Mean Average Precision) is: | ['3/4', '5/12', '7/24', '1/2'] | B | null | Document 1:::
Precision and recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: precision = relevant retrieved instances / all retrieved instances. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved.
Document 2:::
Precision and recall
Written as a formula: recall = relevant retrieved instances / all relevant instances. Both precision and recall are therefore based on relevance.
Document 3:::
Average precision
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
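Applied to the ranking in the question above (N N R R), average precision is the mean of the precision values at the ranks where relevant documents appear. The sketch below assumes, for simplicity, that the two retrieved relevant documents are the only relevant ones and that MAP over this single query equals its average precision.

```python
def average_precision(relevance):
    """relevance: 0/1 flags in rank order; AP averaged over the retrieved relevant docs."""
    hits, precisions = 0, []
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)   # precision at each relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0

print(average_precision([0, 0, 1, 1]))   # (1/3 + 2/4) / 2 = 5/12 ≈ 0.4167
```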
Document 4:::
Mean absolute percentage error
The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of prediction accuracy of a forecasting method in statistics. It usually expresses the accuracy as a ratio defined by the formula: $\text{MAPE} = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$, where $A_t$ is the actual value and $F_t$ is the forecast value. Their difference is divided by the actual value $A_t$. The absolute value of this ratio is summed for every forecasted point in time and divided by the number of fitted points $n$.
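The formula above translates directly into a short function; this is a minimal sketch that assumes no actual value is zero (the ratio is undefined there).

```python
def mape(actual, forecast):
    """Mean absolute percentage error; assumes no actual value is zero."""
    n = len(actual)
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n

print(mape([100.0, 200.0, 400.0], [110.0, 190.0, 400.0]))  # (0.1 + 0.05 + 0) / 3 = 0.05
```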
Document 5:::
Average precision
Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection. |
epfl-collab | Which of the following is true? | ['High recall implies low precision', 'High recall hurts precision', 'High precision implies low recall', 'High precision hurts recall'] | D | null | Document 1:::
P value
57, No 3, 171–182 (with discussion). For a concise modern statement see Chapter 10 of "All of Statistics: A Concise Course in Statistical Inference," Springer; 1st Corrected ed. 20 edition (September 17, 2004). Larry Wasserman.
Document 2:::
Principle of contradiction
In logic, the law of non-contradiction (LNC) (also known as the law of contradiction, principle of non-contradiction (PNC), or the principle of contradiction) states that contradictory propositions cannot both be true in the same sense at the same time, e. g. the two propositions "p is the case" and "p is not the case" are mutually exclusive. Formally, this is expressed as the tautology ¬(p ∧ ¬p). The law is not to be confused with the law of excluded middle which states that at least one, "p is the case" or "p is not the case" holds. One reason to have this law is the principle of explosion, which states that anything follows from a contradiction.
Document 3:::
Markov property (group theory)
In the mathematical subject of group theory, the Adian–Rabin theorem is a result that states that most "reasonable" properties of finitely presentable groups are algorithmically undecidable. The theorem is due to Sergei Adian (1955) and, independently, Michael O. Rabin (1958).
Document 4:::
Hinge theorem
In geometry, the hinge theorem (sometimes called the open mouth theorem) states that if two sides of one triangle are congruent to two sides of another triangle, and the included angle of the first is larger than the included angle of the second, then the third side of the first triangle is longer than the third side of the second triangle. This theorem is given as Proposition 24 in Book I of Euclid's Elements.
Document 5:::
Remarks on the Foundations of Mathematics
Thus it can only be true, but unprovable." Just as we can ask, " 'Provable' in what system?," so we must also ask, "'True' in what system?" "True in Russell's system" means, as was said, proved in Russell's system, and "false" in Russell's system means the opposite has been proved in Russell's system.—Now, what does your "suppose it is false" mean? |
epfl-collab | The inverse document frequency of a term can increase | ['by adding a document to the document collection that contains the term', 'by adding a document to the document collection that does not contain the term', 'by adding the term to a document that contains the term', 'by removing a document from the document collection that does not contain the term'] | B | null | Document 1:::
Inverted index
In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content). The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines.
Document 2:::
Inverted index
Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204. There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created.
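A record-level inverted index as described above can be sketched as a plain dictionary from term to the documents containing it; the tokenization here is deliberately naive and the whole example is illustrative rather than a production indexer.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Record-level inverted index: term -> sorted list of doc ids containing it."""
    postings = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():     # naive whitespace tokenization
            postings[term].add(doc_id)
    return {term: sorted(ids) for term, ids in postings.items()}

docs = {1: "the cat sat", 2: "the dog sat", 3: "a cat and a dog"}
index = build_inverted_index(docs)
print(index["cat"])                                    # [1, 3]
print(sorted(set(index["cat"]) & set(index["sat"])))   # AND query "cat sat": [1]
```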
Document 3:::
ETBLAST
eTBLAST received thousands of random samples of Medline abstracts for a large-scale study. Those with the highest similarity were assessed and then entered into an on-line database. The work revealed several trends, including an increasing rate of duplication in the biomedical literature, according to prominent scientific journals Bioinformatics, Anaesthesia and Intensive Care, Clinical Chemistry, Urologic oncology, Nature, and Science.
Document 4:::
Uncertain inference
Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus d → q {\displaystyle d\to q} is uncertain. This will affect the plausibility of a given query.
Document 5:::
Uncertain inference
Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus d → q {\displaystyle d\to q} is uncertain. This will affect the plausibility of a given query. |
epfl-collab | Which of the following is wrong regarding Ontologies? | ['Ontologies support domain-specific vocabularies', 'Ontologies help in the integration of data expressed in different models', 'We can create more than one ontology that conceptualize the same real-world entities', 'Ontologies dictate how semi-structured data are serialized'] | D | null | Document 1:::
Class (knowledge representation)
The first definition of class results in ontologies in which a class is a subclass of collection. The second definition of class results in ontologies in which collections and classes are more fundamentally different. Classes may classify individuals, other classes, or a combination of both.
Document 2:::
Class (knowledge representation)
While extensional classes are more well-behaved and well understood mathematically, as well as less problematic philosophically, they do not permit the fine grained distinctions that ontologies often need to make. For example, an ontology may want to distinguish between the class of all creatures with a kidney and the class of all creatures with a heart, even if these classes happen to have exactly the same members. In most upper ontologies, the classes are defined intensionally. Intensionally defined classes usually have necessary conditions associated with membership in each class. Some classes may also have sufficient conditions, and in those cases the combination of necessary and sufficient conditions make that class a fully defined class.
Document 3:::
Plant ontology
Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education.
Document 4:::
Class (knowledge representation)
The classes of an ontology may be extensional or intensional in nature. A class is extensional if and only if it is characterized solely by its membership. More precisely, a class C is extensional if and only if for any class C', if C' has exactly the same members as C, then C and C' are identical. If a class does not satisfy this condition, then it is intensional.
Document 5:::
Disease Ontology
The Disease Ontology (DO) is a formal ontology of human disease. The Disease Ontology project is hosted at the Institute for Genome Sciences at the University of Maryland School of Medicine. The Disease Ontology project was initially developed in 2003 at Northwestern University to address the need for a purpose-built ontology that covers the full spectrum of disease concepts annotated within biomedical repositories within an ontological framework that is extensible to meet community needs. |
epfl-collab | In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)? | ['R@k-1 < R@k+1', 'P@k-1 = P@k+1', 'R@k-1 = R@k+1', 'P@k-1 > P@k+1'] | A | null | Document 1:::
Precision and recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: precision = relevant retrieved instances / all retrieved instances. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved.
Document 2:::
Precision and recall
Written as a formula: recall = relevant retrieved instances / all relevant instances. Both precision and recall are therefore based on relevance.
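To connect these definitions to the P@k and R@k notation used in the question, here is a small sketch computing precision and recall over the top k ranked results; recall needs the total number of relevant documents, which must be supplied separately. The example ranking mirrors the question: a non-relevant result at position k followed by a relevant one at k+1.

```python
def precision_at_k(relevance, k):
    """Fraction of the top-k results that are relevant (relevance: 0/1 flags in rank order)."""
    return sum(relevance[:k]) / k

def recall_at_k(relevance, k, total_relevant):
    """Fraction of all relevant documents that appear among the top-k results."""
    return sum(relevance[:k]) / total_relevant

ranked = [0, 1, 0, 1]          # position k = 3 is non-relevant, position k+1 = 4 is relevant
print(precision_at_k(ranked, 2), precision_at_k(ranked, 4))   # 0.5 0.5
print(recall_at_k(ranked, 2, 2), recall_at_k(ranked, 4, 2))   # 0.5 1.0  -> R@k-1 < R@k+1
```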
Document 3:::
Average precision
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
Document 4:::
Precision and recall
More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors, for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned).
Document 5:::
Uncertain inference
Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus d → q {\displaystyle d\to q} is uncertain. This will affect the plausibility of a given query. |
epfl-collab | What is true regarding Fagin's algorithm? | ['It never reads more than (kn)½ entries from a posting list', 'It provably returns the k documents with the largest aggregate scores', 'Posting files need to be indexed by TF-IDF weights', 'It performs a complete scan over the posting files'] | B | null | Document 1:::
Fagin's theorem
Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines.
Document 2:::
Ford-Fulkerson algorithm
The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is sometimes called a "method" instead of an "algorithm" as the approach to finding augmenting paths in a residual graph is not fully specified or it is specified in several implementations with different running times. It was published in 1956 by L. R. Ford Jr. and D. R. Fulkerson.
Document 3:::
Ford-Fulkerson algorithm
The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path.
Document 4:::
Fibonacci search technique
In computer science, the Fibonacci search technique is a method of searching a sorted array using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers. Compared to binary search where the sorted array is divided into two equal-sized parts, one of which is examined further, Fibonacci search divides the array into two parts that have sizes that are consecutive Fibonacci numbers. On average, this leads to about 4% more comparisons to be executed, but it has the advantage that one only needs addition and subtraction to calculate the indices of the accessed array elements, while classical binary search needs bit-shift (see Bitwise operation), division or multiplication, operations that were less common at the time Fibonacci search was first published. Fibonacci search has an average- and worst-case complexity of O(log n) (see Big O notation).
Document 5:::
Faugère's F4 and F5 algorithms
This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences. |
epfl-collab | Which of the following is WRONG for Ontologies? | ['Different information systems need to agree on the same ontology in order to interoperate.', 'They help in the integration of data expressed in different models.', 'They give the possibility to specify schemas for different domains.', 'They dictate how semi-structured data are serialized.'] | D | null | Document 1:::
Class (knowledge representation)
While extensional classes are more well-behaved and well understood mathematically, as well as less problematic philosophically, they do not permit the fine grained distinctions that ontologies often need to make. For example, an ontology may want to distinguish between the class of all creatures with a kidney and the class of all creatures with a heart, even if these classes happen to have exactly the same members. In most upper ontologies, the classes are defined intensionally. Intensionally defined classes usually have necessary conditions associated with membership in each class. Some classes may also have sufficient conditions, and in those cases the combination of necessary and sufficient conditions make that class a fully defined class.
Document 2:::
Class (knowledge representation)
The first definition of class results in ontologies in which a class is a subclass of collection. The second definition of class results in ontologies in which collections and classes are more fundamentally different. Classes may classify individuals, other classes, or a combination of both.
Document 3:::
Class (knowledge representation)
The classes of an ontology may be extensional or intensional in nature. A class is extensional if and only if it is characterized solely by its membership. More precisely, a class C is extensional if and only if for any class C', if C' has exactly the same members as C, then C and C' are identical. If a class does not satisfy this condition, then it is intensional.
Document 4:::
Disease Ontology
The Disease Ontology (DO) is a formal ontology of human disease. The Disease Ontology project is hosted at the Institute for Genome Sciences at the University of Maryland School of Medicine. The Disease Ontology project was initially developed in 2003 at Northwestern University to address the need for a purpose-built ontology that covers the full spectrum of disease concepts annotated within biomedical repositories within an ontological framework that is extensible to meet community needs.
Document 5:::
Plant ontology
Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education. |
epfl-collab | What is the benefit of LDA over LSI? | ['LSI is based on a model of how documents are generated, whereas LDA is not', 'LDA has better theoretical explanation, and its empirical results are in general better than LSI’s', 'LSI is sensitive to the ordering of the words in a document, whereas LDA is not', 'LDA represents semantic dimensions (topics, concepts) as weighted combinations of terms, whereas LSI does not'] | B | null | Document 1:::
Discriminant function analysis
Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification. LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label).
Document 2:::
Dry low emission
A DLE combustor takes up more space than a SAC turbine and if the turbine is changed it can not be connected directly to existing equipment without considerable changes in the positioning of the equipment. The SAC turbine has one single concentric ring where the DLE turbine has two or three rings with premixers depending on gas turbine type. DLE technology demands an advanced control system with a large number of burners. DLE results in lower NOx emissions because the process is run with less fuel and air, the temperature is lower and combustion takes place at a lower temperature.
Document 3:::
Digital Differential Analyzer
A digital differential analyzer (DDA), also sometimes called a digital integrating computer, is a digital implementation of a differential analyzer. The integrators in a DDA are implemented as accumulators, with the numeric result converted back to a pulse rate by the overflow of the accumulator. The primary advantages of a DDA over the conventional analog differential analyzer are greater precision of the results and the lack of drift/noise/slip/lash in the calculations. The precision is only limited by register size and the resulting accumulated rounding/truncation errors of repeated addition.
Document 4:::
Link Capacity Adjustment Scheme
Link Capacity Adjustment Scheme or LCAS is a method to dynamically increase or decrease the bandwidth of virtual concatenated containers. The LCAS protocol is specified in ITU-T G.7042. It allows on-demand increase or decrease of the bandwidth of the virtual concatenated group in a hitless manner. This brings bandwidth-on-demand capability for data clients like Ethernet when mapped into TDM containers.
Document 5:::
Life cycle cost analysis
The term differs slightly from Total cost of ownership analysis (TCOA). LCCA determines the most cost-effective option to purchase, run, sustain or dispose of an object or process, and TCOA is used by managers or buyers to analyze and determine the direct and indirect cost of an item.The term is used in the study of Industrial ecology (IE). The purpose of IE is to help managers make informed decisions by tracking and analyzing products, resources and wastes. |
epfl-collab | Maintaining the order of document identifiers for vocabulary construction when partitioning the document collection is important | ['in both', 'in neither of the two', 'in the index merging approach for single node machines', 'in the map-reduce approach for parallel clusters'] | C | null | Document 1:::
Text categorization
Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science.
Document 2:::
Thesaurus (information retrieval)
In the context of information retrieval, a thesaurus (plural: "thesauri") is a form of controlled vocabulary that seeks to dictate semantic manifestations of metadata in the indexing of content objects. A thesaurus serves to minimise semantic ambiguity by ensuring uniformity and consistency in the storage and retrieval of the manifestations of content objects. ANSI/NISO Z39.19-2005 defines a content object as "any item that is to be described for inclusion in an information retrieval system, website, or other source of information". The thesaurus aids the assignment of preferred terms to convey semantic metadata associated with the content object.A thesaurus serves to guide both an indexer and a searcher in selecting the same preferred term or combination of preferred terms to represent a given subject. ISO 25964, the international standard for information retrieval thesauri, defines a thesaurus as a “controlled and structured vocabulary in which concepts are represented by terms, organized so that relationships between concepts are made explicit, and preferred terms are accompanied by lead-in entries for synonyms or quasi-synonyms.” A thesaurus is composed by at least three elements: 1-a list of words (or terms), 2-the relationship amongst the words (or terms), indicated by their hierarchical relative position (e.g. parent/broader term; child/narrower term, synonym, etc.), 3-a set of rules on how to use the thesaurus.
Document 3:::
Taxonomic treatment
In today’s publishing, a taxonomic treatment tag is used to delimit such a section. It makes this section findable, accessible, interoperable and reusable (FAIR) data. This is implemented in the Biodiversity Literature Repository, where upon deposition of the treatment a persistent DataCite digital object identifier (DOI) is minted.
Document 4:::
Information Coding Classification
The terms of the first three hierarchical levels were set out in German and English in Wissensorganisation. Entwicklung, Aufgabe, Anwendung, Zukunft, on pp. 82 to 100.
Document 5:::
Information Coding Classification
It was published in 2014 and available so far only in German. In the meantime, also the French terms of the knowledge fields have been collected. Competence for maintenance and further development rests with the German Chapter of the International Society for Knowledge Organization (ISKO) e.V. |
epfl-collab | Which of the following is correct regarding Crowdsourcing? | ['It is applicable only for binary classification problems', 'The output of Majority Decision can be equal to the one of Expectation-Maximization', 'Random Spammers give always the same answer for every question', 'Honey Pot discovers all the types of spammers but not the sloppy workers'] | B | null | Document 1:::
Crowd sourcing
Daren C. Brabham defined crowdsourcing as an "online, distributed problem-solving and production model." Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms are affected not only by their quality, but also by the communication among users about the ideas, and presentation in the platform itself.Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public, and an open call for contributions to help solve the problem.
Document 2:::
Crowd sourcing
The term crowdsourcing was coined in 2006 by two editors at Wired, Jeff Howe and Mark Robinson, to describe how businesses were using the Internet to "outsource work to the crowd", which quickly led to the portmanteau "crowdsourcing". Howe published a definition for the term in a blog post in June 2006: Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.
Document 3:::
Crowd sourcing
Members of the public submit solutions that are then owned by the entity who originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or public recognition. In other cases, the only rewards may be praise or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, from experts, or from small businesses.
Document 4:::
Crowdsourcing as human-machine translation
The use of crowdsourcing and text corpora in human-machine translation (HMT) has become predominant in this area within the last few years, in comparison to solely using machine translation (MT). A few recent journal articles have looked into the benefits that using crowdsourcing as a translation technique could bring to the current approach to the task, and how it could help improve and make more efficient the current tools available to the public.
Document 5:::
Crowd Supply
Crowd Supply is a crowdfunding platform based in Portland, Oregon. The platform has claimed "over twice the success rate of Kickstarter and Indiegogo", and partners with creators who use it, providing mentorship resembling a business incubator.Some see Crowd Supply's close management of projects as the solution to the fulfillment failure rate of other crowdfunding platforms. The site also serves as an online store for the inventories of successful campaigns.Notable projects from the platform include Andrew Huang's Novena, an open-source laptop. |
epfl-collab | When computing PageRank iteratively, the computation ends when... | ['The norm of the difference of rank vectors of two subsequent iterations falls below a predefined threshold', 'All nodes of the graph have been visited at least once', 'The difference among the eigenvalues of two subsequent iterations falls below a predefined threshold', 'The probability of visiting an unseen node falls below a predefined threshold'] | A | null | Document 1:::
PageRank
PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is.
Document 2:::
PageRank
The underlying assumption is that more important websites are likely to receive more links from other websites. Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, PageRank and all associated patents have expired.
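For the iterative computation the question refers to, here is a minimal power-iteration-style PageRank sketch on an adjacency list, stopping exactly when the L1 norm of the difference between successive rank vectors falls below a threshold. The damping factor 0.85 and the dangling-node handling are conventional choices, not something stated in the excerpt above.

```python
def pagerank(links, damping=0.85, tol=1e-8, max_iter=100):
    """links: dict node -> list of nodes it links to. Returns dict node -> rank."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(max_iter):
        new_rank = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:                                   # distribute rank over out-links
                share = damping * rank[u] / len(out)
                for v in out:
                    new_rank[v] += share
            else:                                     # dangling node: spread uniformly
                for v in nodes:
                    new_rank[v] += damping * rank[u] / n
        # Convergence test: norm of the difference of two subsequent rank vectors.
        if sum(abs(new_rank[u] - rank[u]) for u in nodes) < tol:
            return new_rank
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(graph))
```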
Document 3:::
Hilltop algorithm
These are pages that are about a specific topic and have links to many non-affiliated pages on that topic. The original algorithm relied on independent directories with categorized links to sites. Results are ranked based on the match between the query and relevant descriptive text for hyperlinks on expert pages pointing to a given result page.
Document 4:::
Ranking
By reducing detailed measures to a sequence of ordinal numbers, rankings make it possible to evaluate complex information according to certain criteria. Thus, for example, an Internet search engine may rank the pages it finds according to an estimation of their relevance, making it possible for the user quickly to select the pages they are likely to want to see. Analysis of data obtained by ranking commonly requires non-parametric statistics.
Document 5:::
Power method
In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix A, the algorithm will produce a number λ, which is the greatest (in absolute value) eigenvalue of A, and a nonzero vector v, which is a corresponding eigenvector of λ, that is, Av = λv. The algorithm is also known as the Von Mises iteration. Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of matrix A by a vector, so it is effective for a very large sparse matrix with appropriate implementation.
epfl-collab | How does LSI querying work? | ['The query vector is treated as an additional term; then cosine similarity is computed', 'The query vector is multiplied with an orthonormal matrix; then cosine similarity is computed', 'The query vector is transformed by Matrix S; then cosine similarity is computed', 'The query vector is treated as an additional document; then cosine similarity is computed'] | D | null | Document 1:::
Data retrieval
The retrieved data may be stored in a file, printed, or viewed on the screen. A query language, like for example Structured Query Language (SQL), is used to prepare the queries. SQL is an American National Standards Institute (ANSI) standardized query language developed specifically to write database queries. Each database management system may have its own language, but most are relational.
Document 2:::
Query optimization
Query optimization is a feature of many relational database management systems and other databases such as NoSQL and graph databases. The query optimizer attempts to determine the most efficient way to execute a given query by considering the possible query plans.Generally, the query optimizer cannot be accessed directly by users: once queries are submitted to the database server, and parsed by the parser, they are then passed to the query optimizer where optimization occurs. However, some database engines allow guiding the query optimizer with hints. A query is a request for information from a database.
Document 3:::
Query optimization
It can be as simple as "find the address of a person with Social Security number 123-45-6789," or more complex like "find the average salary of all the employed married men in California between the ages 30 to 39 who earn less than their spouses." The result of a query is generated by processing the rows in a database in a way that yields the requested information. Since database structures are complex, in most cases, and especially for not-very-simple queries, the needed data for a query can be collected from a database by accessing it in different ways, through different data-structures, and in different orders.
Document 4:::
Query understanding
Query understanding is the process of inferring the intent of a search engine user by extracting semantic meaning from the searcher’s keywords. Query understanding methods generally take place before the search engine retrieves and ranks results. It is related to natural language processing but specifically focused on the understanding of search queries. Query understanding is at the heart of technologies like Amazon Alexa, Apple's Siri, Google Assistant, IBM's Watson, and Microsoft's Cortana.
Document 5:::
Query (complexity)
In descriptive complexity, a query is a mapping from structures of one signature to structures of another vocabulary. Neil Immerman, in his book Descriptive Complexity, "use the concept of query as the fundamental paradigm of computation" (p. 17). Given signatures σ and τ, we define the sets of structures on each vocabulary, STRUC[σ] and STRUC[τ]. A query is then any mapping I : STRUC[σ] → STRUC[τ]. Computational complexity theory can then be phrased in terms of the power of the mathematical logic necessary to express a given query.
epfl-collab | Suppose that an item in a leaf node N exists in every path. Which one is correct? | ['For every node P that is a parent of N in the fp tree, confidence(P->N) = 1', 'N’s minimum possible support is equal to the number of paths.', 'N co-occurs with its prefix in every transaction.', 'The item N exists in every candidate set.'] | B | null | Document 1:::
Tree (automata theory)
If every node of a tree has finitely many successors, then it is called a finitely branching tree, otherwise an infinitely branching tree. A path π is a subset of T such that ε ∈ π and for every t ∈ T, either t is a leaf or there exists a unique c ∈ ℕ such that t.c ∈ π. A path may be a finite or infinite set. If all paths of a tree are finite then the tree is called finite, otherwise infinite.
Document 2:::
Arborescence (graph theory)
In graph theory, an arborescence is a directed graph in which, for a vertex u (called the root) and any other vertex v, there is exactly one directed path from u to v. An arborescence is thus the directed-graph form of a rooted tree, understood here as an undirected graph.Equivalently, an arborescence is a directed, rooted tree in which all edges point away from the root; a number of other equivalent characterizations exist. Every arborescence is a directed acyclic graph (DAG), but not every DAG is an arborescence. An arborescence can equivalently be defined as a rooted digraph in which the path from the root to any other vertex is unique.
Document 3:::
Shortest-path tree
In mathematics and computer science, a shortest-path tree rooted at a vertex v of a connected, undirected graph G is a spanning tree T of G, such that the path distance from root v to any other vertex u in T is the shortest path distance from v to u in G. In connected graphs where shortest paths are well-defined (i.e. where there are no negative-length cycles), we may construct a shortest-path tree using the following algorithm: Compute dist(u), the shortest-path distance from root v to vertex u in G using Dijkstra's algorithm or Bellman–Ford algorithm. For all non-root vertices u, we can assign to u a parent vertex pu such that pu is connected to u, and that dist(pu) + edge_dist(pu,u) = dist(u). In case multiple choices for pu exist, choose pu for which there exists a shortest path from v to pu with as few edges as possible; this tie-breaking rule is needed to prevent loops when there exist zero-length cycles.
Document 4:::
Hamiltonian graph
In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in an undirected or directed graph that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a cycle that visits each vertex exactly once. A Hamiltonian path that starts and ends at adjacent vertices can be completed by adding one more edge to form a Hamiltonian cycle, and removing any edge from a Hamiltonian cycle produces a Hamiltonian path. Determining whether such paths and cycles exist in graphs (the Hamiltonian path problem and Hamiltonian cycle problem) are NP-complete.
Document 5:::
Dynamic trees
By doing this operation on two distinct nodes, one can check whether they belong to the same tree.The represented forest may consist of very deep trees, so if we represent the forest as a plain collection of parent pointer trees, it might take us a long time to find the root of a given node. However, if we represent each tree in the forest as a link/cut tree, we can find which tree an element belongs to in O(log(n)) amortized time. Moreover, we can quickly adjust the collection of link/cut trees to changes in the represented forest. |
epfl-collab | In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)? | ['P@k-1 = P@k+1', 'R@k-1 < R@k+1', 'R@k-1 = R@k+1', 'P@k-1 > P@k+1'] | B | null | Document 1:::
Precision and recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: precision = relevant retrieved instances / all retrieved instances. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved.
Document 2:::
Precision and recall
Written as a formula: $\text{recall} = \frac{\text{relevant retrieved instances}}{\text{all relevant instances}}$. Both precision and recall are therefore based on relevance.
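The two formulas quoted above can be checked mechanically. Below is a minimal Python sketch (the ranked list of relevance judgments and the total number of relevant documents are invented for illustration; this is not code from the source dataset) that computes P@k and R@k for a ranked result list, the setting of this record's question.

```python
def precision_recall_at_k(relevance, k, total_relevant):
    """relevance: 0/1 judgments in rank order; k: cutoff; total_relevant: relevant docs in the collection."""
    relevant_retrieved = sum(relevance[:k])
    return relevant_retrieved / k, relevant_retrieved / total_relevant

# Hypothetical ranked list where rank 3 is non-relevant and rank 4 is relevant.
ranked = [1, 1, 0, 1]
total_relevant = 3   # assumed size of the relevant set
for k in (2, 3, 4):
    p, r = precision_recall_at_k(ranked, k, total_relevant)
    print(f"P@{k}={p:.2f}  R@{k}={r:.2f}")
# Adding a non-relevant document leaves recall unchanged; adding a relevant one increases it.
```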
Document 3:::
Average precision
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
Document 4:::
Precision and recall
More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors, for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned).
Document 5:::
Uncertain inference
Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus $d \to q$ is uncertain. This will affect the plausibility of a given query. |
epfl-collab | For the number of times the apriori algorithm and the FPgrowth algorithm for association rule mining scan the transaction database, the following is true | ['apriori cannot have fewer scans than fpgrowth', 'fpgrowth and apriori can have the same number of scans', 'all three above statements are false', 'fpgrowth has always strictly fewer scans than apriori'] | B | null | Document 1:::
Apriori algorithm
Apriori is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine association rules which highlight general trends in the database: this has applications in domains such as market basket analysis.
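To make the level-wise behaviour described above concrete, here is a small, self-contained Python sketch of Apriori-style frequent-itemset mining (the transactions and the support threshold are invented, and the prune step of the full algorithm is omitted; this is an illustration, not the dataset's own code). Each level costs one more pass over the transactions, which is what the scan-count comparison with FP-growth in this record's question is about.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets as a dict {frozenset: support count}."""
    transactions = [set(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    current = [frozenset([i]) for i in items]   # level-1 candidates
    frequent = {}
    k = 1
    while current:
        # One full scan of the transaction database per level k.
        counts = {c: sum(1 for t in transactions if c.issubset(t)) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join step: combine frequent k-itemsets into (k+1)-item candidates.
        # (The subset-based prune step of full Apriori is omitted for brevity.)
        keys = list(level)
        current = list({a.union(b) for a, b in combinations(keys, 2)
                        if len(a.union(b)) == k + 1})
        k += 1
    return frequent

txns = [["milk", "bread"], ["milk", "beer"], ["milk", "bread", "beer"], ["bread"]]
print(apriori(txns, min_support=2))
```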
Document 2:::
Affinity analysis
Also, a priori algorithm is used to reduce the search space for the problem. The support metric in the association rule learning algorithm is defined as the frequency of the antecedent or consequent appearing together in a data set. Moreover, confidence is expressed as the reliability of the association rules determined by the ratio of the data records containing both A and B. The minimum threshold for support and confidence are inputs to the model. Considering all the above-mentioned definitions, affinity analysis can develop rules that will predict the occurrence of an event based on the occurrence of other events.
Document 3:::
Affinity analysis
Also, a priori algorithm is used to reduce the search space for the problem. The support metric in the association rule learning algorithm is defined as the frequency of the antecedent or consequent appearing together in a data set. Moreover, confidence is expressed as the reliability of the association rules determined by the ratio of the data records containing both A and B. The minimum threshold for support and confidence are inputs to the model. Considering all the above-mentioned definitions, affinity analysis can develop rules that will predict the occurrence of an event based on the occurrence of other events.
Document 4:::
Affinity analysis
The first condition or feature (A) is called antecedent and the latter (B) is known as consequent. This process is repeated until no additional frequent itemsets are found. There are two important metrics for performing the association rules mining technique: support and confidence.
Document 5:::
Affinity analysis
The first condition or feature (A) is called antecedent and the latter (B) is known as consequent. This process is repeated until no additional frequent itemsets are found. There are two important metrics for performing the association rules mining technique: support and confidence. |
epfl-collab | Given the following teleporting matrix (Ε) for nodes A, B and C: [0 ½ 0] [0 0 0] [0 ½ 1] and making no assumptions about the link matrix (R), which of the following is correct: (Reminder: columns are the probabilities to leave the respective node.) | ['A random walker can never leave node A', 'A random walker can always leave node B', 'A random walker can never reach node A', 'A random walker can always leave node C'] | B | null | Document 1:::
Transition rate matrix
In probability theory, a transition-rate matrix (also known as a Q-matrix, intensity matrix, or infinitesimal generator matrix) is an array of numbers describing the instantaneous rate at which a continuous-time Markov chain transitions between states. In a transition-rate matrix Q (sometimes written A), element qij (for i ≠ j) denotes the rate departing from i and arriving in state j. Diagonal elements qii are defined such that $q_{ii} = -\sum_{j \neq i} q_{ij}$, and therefore the rows of the matrix sum to zero. (See condition 3 in the definition section.)
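As a quick numerical illustration of the row-sum property just quoted, the following short Python check uses an invented two-state rate matrix; it is a sketch, not part of the source record.

```python
# Hypothetical 2-state transition-rate matrix: leave state 0 at rate 3, state 1 at rate 5.
Q = [[-3.0, 3.0],
     [5.0, -5.0]]

for i, row in enumerate(Q):
    off_diag = sum(q for j, q in enumerate(row) if j != i)
    assert abs(row[i] + off_diag) < 1e-12   # q_ii = -(sum of off-diagonal rates in row i)
    print(f"row {i} sums to {sum(row):.1f}")
```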
Document 2:::
Network probability matrix
The network probability matrix describes the probability structure of a network based on the historical presence or absence of edges in a network. For example, individuals in a social network are not connected to other individuals with uniform random probability. The probability structure is much more complex. Intuitively, there are some people whom a person will communicate with or be connected more closely than others.
Document 3:::
Random walk closeness centrality
$H(\cdot, j) = (I - M_{-j})^{-1} e$, where $H(\cdot, j)$ is the vector for first passage times for a walk ending at node j, and e is an n-1 dimensional vector of ones. Mean first passage time is not symmetric, even for undirected graphs.
Document 4:::
Graph Laplacian
For example, let $e_i$ denote the i-th standard basis vector. Then $x = e_i P$ is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex $i$; i.e., $x_j = \mathbb{P}(v_i \to v_j)$.
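The single-step distribution x = e_i P described above gives a direct way to inspect the teleporting matrix in this record's question; the NumPy sketch below is illustrative only and follows the record's reminder that columns hold the probabilities of leaving each node (hence the transpose).

```python
import numpy as np

# Teleporting matrix from the question: column j holds the teleport probabilities out of node j (A, B, C).
E = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.5, 1.0]])
P = E.T   # row i = distribution of teleport targets when starting from node i

for node, row in zip("ABC", P):
    print(node, "total outgoing teleport mass:", row.sum(), "targets:", dict(zip("ABC", row)))

# One teleport step from node B, written as a standard-basis vector times P.
e_B = np.array([0.0, 1.0, 0.0])
print("distribution after one step from B:", e_B @ P)
```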
Document 5:::
Markov arrival process
A Markov arrival process is defined by two matrices, D0 and D1, where elements of D0 represent hidden transitions and elements of D1 observable transitions. The block matrix Q below is a transition rate matrix for a continuous-time Markov chain. $Q = \begin{bmatrix} D_0 & D_1 & 0 & \cdots \\ 0 & D_0 & D_1 & \cdots \\ 0 & 0 & D_0 & \ddots \\ \vdots & \vdots & \ddots & \ddots \end{bmatrix}$. The simplest example is a Poisson process where D0 = −λ and D1 = λ, where there is only one possible transition, it is observable, and occurs at rate λ. For Q to be a valid transition rate matrix, the following restrictions apply to the Di: $0 \leq [D_1]_{i,j} < \infty$; $0 \leq [D_0]_{i,j} < \infty$ for $i \neq j$; $[D_0]_{i,i} < 0$; and $(D_0 + D_1)\mathbf{1} = \mathbf{0}$. |
epfl-collab | Which of the following methods does not exploit statistics on the co-occurrence of words in a text? | ['Vector space retrieval\n\n\n', 'Transformers\n\n\n', 'Word embeddings\n\n\n', 'Fasttext'] | A | null | Document 1:::
Random indexing
In Euclidean spaces, random projections are elucidated using the Johnson–Lindenstrauss lemma. The TopSig technique extends the random indexing model to produce bit vectors for comparison with the Hamming distance similarity function. It is used for improving the performance of information retrieval and document clustering. In a similar line of research, Random Manhattan Integer Indexing (RMII) is proposed for improving the performance of the methods that employ the Manhattan distance between text units. Many random indexing methods primarily generate similarity from co-occurrence of items in a corpus. Reflexive Random Indexing (RRI) generates similarity from co-occurrence and from shared occurrence with other items.
Document 2:::
Noisy text analytics
Noisy text analytics is a process of information extraction whose goal is to automatically extract structured or semistructured information from noisy unstructured text data. While Text analytics is a growing and mature field that has great value because of the huge amounts of data being produced, processing of noisy text is gaining in importance because a lot of common applications produce noisy text data. Noisy unstructured text data is found in informal settings such as online chat, text messages, e-mails, message boards, newsgroups, blogs, wikis and web pages. Also, text produced by processing spontaneous speech using automatic speech recognition and printed or handwritten text using optical character recognition contains processing noise.
Document 3:::
Cosine similarity
For example, in information retrieval and text mining, each word is assigned a different coordinate and a document is represented by the vector of the numbers of occurrences of each word in the document. Cosine similarity then gives a useful measure of how similar two documents are likely to be, in terms of their subject matter, and independently of the length of the documents. The technique is also used to measure cohesion within clusters in the field of data mining. One advantage of cosine similarity is its low complexity, especially for sparse vectors: only the non-zero coordinates need to be considered. Other names for cosine similarity include Orchini similarity and Tucker coefficient of congruence; the Otsuka–Ochiai similarity (see below) is cosine similarity applied to binary data.
Document 4:::
Random indexing
Random indexing is a dimensionality reduction method and computational framework for distributional semantics, based on the insight that very-high-dimensional vector space model implementations are impractical, that models need not grow in dimensionality when new items (e.g. new terminology) are encountered, and that a high-dimensional model can be projected into a space of lower dimensionality without compromising L2 distance metrics if the resulting dimensions are chosen appropriately. This is the original point of the random projection approach to dimension reduction first formulated as the Johnson–Lindenstrauss lemma, and locality-sensitive hashing has some of the same starting points. Random indexing, as used in representation of language, originates from the work of Pentti Kanerva on sparse distributed memory, and can be described as an incremental formulation of a random projection. It can also be verified that random indexing is a random projection technique for the construction of Euclidean spaces—i.e. L2 normed vector spaces.
Document 5:::
Biomedical text mining
Biomedical text mining (including biomedical natural language processing or BioNLP) refers to the methods and study of how text mining may be applied to texts and literature of the biomedical domain. As a field of research, biomedical text mining incorporates ideas from natural language processing, bioinformatics, medical informatics and computational linguistics. The strategies in this field have been applied to the biomedical literature available through services such as PubMed. In recent years, the scientific literature has shifted to electronic publishing but the volume of information available can be overwhelming. |
epfl-collab | Which attribute gives the best split? A1: {a: P=4, N=4; b: P=4, N=4}; A2: {x: P=5, N=1; y: P=3, N=3}; A3: {t: P=6, N=1; j: P=2, N=3} | ['A1', 'All the same', 'A3', 'A2'] | C | null | Document 1:::
Split (graph theory)
In graph theory, a split of an undirected graph is a cut whose cut-set forms a complete bipartite graph. A graph is prime if it has no splits. The splits of a graph can be collected into a tree-like structure called the split decomposition or join decomposition, which can be constructed in linear time. This decomposition has been used for fast recognition of circle graphs and distance-hereditary graphs, as well as for other problems in graph algorithms. Splits and split decompositions were first introduced by Cunningham (1982), who also studied variants of the same notions for directed graphs.
Document 2:::
Iterative Dichotomiser 3
Calculate the entropy of every attribute $a$ of the data set $S$. Partition ("split") the set $S$ into subsets using the attribute for which the resulting entropy after splitting is minimized; or, equivalently, information gain is maximum. Make a decision tree node containing that attribute. Recurse on subsets using the remaining attributes.
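Applying the entropy and information-gain recipe above to the per-attribute counts in this record's question (using the counts as reconstructed in the question row; that reading of the flattened table is an assumption) gives the following short sketch.

```python
from math import log2

def entropy(p, n):
    total = p + n
    if p == 0 or n == 0:
        return 0.0
    return -(p / total) * log2(p / total) - (n / total) * log2(n / total)

def information_gain(partitions):
    """partitions: one (P, N) count pair per attribute value."""
    p_tot = sum(p for p, _ in partitions)
    n_tot = sum(n for _, n in partitions)
    total = p_tot + n_tot
    remainder = sum((p + n) / total * entropy(p, n) for p, n in partitions)
    return entropy(p_tot, n_tot) - remainder

attributes = {
    "A1": [(4, 4), (4, 4)],
    "A2": [(5, 1), (3, 3)],
    "A3": [(6, 1), (2, 3)],
}
for name, parts in attributes.items():
    print(name, round(information_gain(parts), 3))
# With these counts A3 yields the largest gain, in line with the recorded answer.
```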
Document 3:::
Split (phylogenetics)
A split in phylogenetics is a bipartition of a set of taxa, and the smallest unit of information in unrooted phylogenetic trees: each edge of an unrooted phylogenetic tree represents one split, and the tree can be efficiently reconstructed from its set of splits. Moreover, when given several trees, the splits occurring in more than half of these trees give rise to a consensus tree, and the splits occurring in a smaller fraction of the trees generally give rise to a consensus Split Network.
Document 4:::
AZ64
AZ64 or AZ64 Encoding is a data compression algorithm proprietary to Amazon Web Services. Amazon claims better compression and better speed than raw, LZO or Zstandard, when used in Amazon's Redshift service.
Document 5:::
842 (compression algorithm)
842, 8-4-2, or EFT is a data compression algorithm. It is a variation on Lempel–Ziv compression with a limited dictionary length. With typical data, 842 gives 80 to 90 percent of the compression of LZ77 with much faster throughput and less memory use. Hardware implementations also provide minimal use of energy and minimal chip area. 842 compression can be used for virtual memory compression, for databases — especially column-oriented stores, and when streaming input-output — for example to do backups or to write to log files. |
epfl-collab | Suppose that q is density reachable from p. The chain of points that ensures this relationship is {t,u,g,r}. Which one is FALSE? | ['p has to be a core point', 'q has to be a border point', 'p and q will also be density-connected', '{t,u,g,r} have to be all core points.'] | B | null | Document 1:::
Density point
In mathematics, Lebesgue's density theorem states that for any Lebesgue measurable set $A \subset \mathbb{R}^n$, the "density" of A is 0 or 1 at almost every point in $\mathbb{R}^n$. Additionally, the "density" of A is 1 at almost every point in A. Intuitively, this means that the "edge" of A, the set of points in A whose "neighborhood" is partially in A and partially outside of A, is negligible. Let μ be the Lebesgue measure on the Euclidean space Rn and A be a Lebesgue measurable subset of Rn. Define the approximate density of A in an ε-neighborhood of a point x in Rn as $d_\varepsilon(x) = \frac{\mu(A \cap B_\varepsilon(x))}{\mu(B_\varepsilon(x))}$, where $B_\varepsilon$ denotes the closed ball of radius ε centered at x. Lebesgue's density theorem asserts that for almost every point x of A the density $d(x) = \lim_{\varepsilon \to 0} d_\varepsilon(x)$ exists and is equal to 0 or 1.
Document 2:::
Density point
The set of points in the plane at which the density is neither 0 nor 1 is non-empty (the square boundary), but it is negligible. The Lebesgue density theorem is a particular case of the Lebesgue differentiation theorem. Thus, this theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure, see Discussion.
Document 3:::
Density point
In other words, for every measurable set A, the density of A is 0 or 1 almost everywhere in Rn. However, if μ(A) > 0 and μ(Rn \ A) > 0, then there are always points of Rn where the density is neither 0 nor 1. For example, given a square in the plane, the density at every point inside the square is 1, on the edges is 1/2, and at the corners is 1/4.
Document 4:::
Contiguity (probability theory)
By the aforementioned logic, this statement is also false. It is possible however that each of the measures Qn be absolutely continuous with respect to Pn, while the sequence Qn not being contiguous with respect to Pn. The fundamental Radon–Nikodym theorem for absolutely continuous measures states that if Q is absolutely continuous with respect to P, then Q has density with respect to P, denoted as ƒ = dQ⁄dP, such that for any measurable set A, $Q(A) = \int_A f \, \mathrm{d}P$, which is interpreted as being able to "reconstruct" the measure Q from knowing the measure P and the derivative ƒ. A similar result exists for contiguous sequences of measures, and is given by Le Cam's third lemma.
Document 5:::
Limiting density of discrete points
In information theory, the limiting density of discrete points is an adjustment to the formula of Claude Shannon for differential entropy. It was formulated by Edwin Thompson Jaynes to address defects in the initial definition of differential entropy. |
epfl-collab | In User-Based Collaborative Filtering, which of the following is correct, assuming that all the ratings are positive? | ['Pearson Correlation Coefficient and Cosine Similarity have the same value range, but can return different similarity ranking for the users', 'Pearson Correlation Coefficient and Cosine Similarity have different value range, but return the same similarity ranking for the users', 'If the variance of the ratings of one of the users is 0, then their Cosine Similarity is not computable', 'If the ratings of two users have both variance equal to 0, then their Cosine Similarity is maximized'] | D | null | Document 1:::
Precision and recall
For classification tasks, the terms true positives, true negatives, false positives, and false negatives (see Type I and type II errors for definitions) compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the classifier's prediction (sometimes known as the expectation), and the terms true and false refer to whether that prediction corresponds to the external judgment (sometimes known as the observation). Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix. Precision and recall are then defined as $\text{precision} = \frac{TP}{TP+FP}$ and $\text{recall} = \frac{TP}{TP+FN}$. Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity.
Document 2:::
Evaluation measures (information retrieval)
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
Document 3:::
Evaluation measures (information retrieval)
Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection.
Document 4:::
Graph Laplacian
Common in applications graphs with weighted edges are conveniently defined by their adjacency matrices where values of the entries are numeric and no longer limited to zeros and ones. In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization.
Document 5:::
Average precision
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query. |
epfl-collab | The term frequency of a term is normalized | ['by the maximal frequency of the term in the document collection', 'by the maximal frequency of all terms in the document', 'by the maximal term frequency of any document in the collection', 'by the maximal frequency of any term in the vocabulary'] | B | null | Document 1:::
Cycles per sample
In digital signal processing (DSP), a normalized frequency is a ratio of a variable frequency (f) and a constant frequency associated with a system (such as a sampling rate, fs). Some software applications require normalized inputs and produce normalized outputs, which can be re-scaled to physical units when necessary. Mathematical derivations are usually done in normalized units, relevant to a wide range of applications.
Document 2:::
Normalization constant
In probability theory, a normalizing constant or normalizing factor is used to reduce any probability function to a probability density function with total probability of one. For example, a Gaussian function can be normalized into a probability density function, which gives the standard normal distribution. In Bayes' theorem, a normalizing constant is used to ensure that the sum of all possible hypotheses equals 1. Other uses of normalizing constants include making the value of a Legendre polynomial at 1 and in the orthogonality of orthonormal functions. A similar concept has been used in areas other than probability, such as for polynomials.
Document 3:::
Cross-spectral density
$\Delta t \to 0$. But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (also see normalized frequency)
Document 4:::
Normalization constant
In probability theory, a normalizing constant is a constant by which an everywhere non-negative function must be multiplied so the area under its graph is 1, e.g., to make it a probability density function or a probability mass function.
Document 5:::
Cumulative frequency analysis
Frequency analysis is the analysis of how often, or how frequently, an observed phenomenon occurs in a certain range. Frequency analysis applies to a record of length N of observed data X1, X2, X3 . . |
epfl-collab | Which is an appropriate method for fighting skewed distributions of class labels in classification? | ['Construct the validation set such that the class label distribution approximately matches the global distribution of the class labels', 'Use leave-one-out cross validation', 'Generate artificial data points for the most frequent classes', 'Include an over-proportional number of samples from the larger class'] | B | null | Document 1:::
Multi-label classification
In machine learning, multi-label classification or multi-output classification is a variant of the classification problem where multiple nonexclusive labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of several (more than two) classes. In the multi-label problem the labels are nonexclusive and there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y; that is, it assigns a value of 0 or 1 for each element (label) in y.
Document 2:::
Multiclass classifier
In machine learning and statistical classification, multiclass classification or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification). While many classification algorithms (notably multinomial logistic regression) naturally permit the use of more than two classes, some are by nature binary algorithms; these can, however, be turned into multinomial classifiers by a variety of strategies. Multiclass classification should not be confused with multi-label classification, where multiple labels are to be predicted for each instance.
Document 3:::
Loss functions for classification
In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to). Given $\mathcal{X}$ as the space of all possible inputs (usually $\mathcal{X} \subset \mathbb{R}^d$), and $\mathcal{Y} = \{-1, 1\}$ as the set of labels (possible outputs), a typical goal of classification algorithms is to find a function $f: \mathcal{X} \to \mathcal{Y}$ which best predicts a label $y$ for a given input $\vec{x}$. However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same $\vec{x}$ to generate different $y$. As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as $I[f] = \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy$, where $V(f(\vec{x}), y)$ is a given loss function, and $p(\vec{x}, y)$ is the probability density function of the process that generated the data, which can equivalently be written as $p(\vec{x}, y) = p(y \mid \vec{x})\, p(\vec{x})$.
Document 4:::
Classification algorithm
This category is about statistical classification algorithms. For more information, see Statistical classification.
Document 5:::
One-class classification
In machine learning, one-class classification (OCC), also known as unary classification or class-modelling, tries to identify objects of a specific class amongst all objects, by primarily learning from a training set containing only the objects of that class, although there exist variants of one-class classifiers where counter-examples are used to further refine the classification boundary. This is different from and more difficult than the traditional classification problem, which tries to distinguish between two or more classes with the training set containing objects from all the classes. Examples include the monitoring of helicopter gearboxes, motor failure prediction, or the operational status of a nuclear plant as 'normal': In this scenario, there are few, if any, examples of catastrophic system states; only the statistics of normal operation are known. While many of the above approaches focus on the case of removing a small number of outliers or anomalies, one can also learn the other extreme, where the single class covers a small coherent subset of the data, using an information bottleneck approach. |
epfl-collab | Thang, Jeremie and Tugrulcan have built their own search engines. For a query Q, they got precision scores of 0.6, 0.7, 0.8 respectively. Their F1 scores (calculated with the same parameters) are the same. Whose search engine has a higher recall on Q? | ['Thang', 'Jeremie', 'We need more information', 'Tugrulcan'] | A | null | Document 1:::
Average precision
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
Document 2:::
Precision and recall
Written as a formula: $\text{recall} = \frac{\text{relevant retrieved instances}}{\text{all relevant instances}}$. Both precision and recall are therefore based on relevance.
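Because the question in this record fixes F1 and varies precision, it is convenient to invert the F1 definition: from F1 = 2PR/(P+R) one gets R = F1·P/(2P − F1). The sketch below uses an assumed common F1 of 0.6 purely to make the numbers concrete.

```python
def recall_from_f1(precision, f1):
    # F1 = 2PR / (P + R)  =>  R = F1 * P / (2P - F1)
    return f1 * precision / (2 * precision - f1)

f1 = 0.6   # assumed shared F1 score
for name, p in [("Thang", 0.6), ("Jeremie", 0.7), ("Tugrulcan", 0.8)]:
    print(name, round(recall_from_f1(p, f1), 3))
# At equal F1, the engine with the lowest precision ends up with the highest recall.
```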
Document 3:::
Average precision
Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection.
Document 4:::
Uncertain inference
Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus $d \to q$ is uncertain. This will affect the plausibility of a given query.
Document 5:::
Uncertain inference
Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus $d \to q$ is uncertain. This will affect the plausibility of a given query. |
epfl-collab | When compressing the adjacency list of a given URL, a reference list | ['Is chosen from neighboring URLs that can be reached in a small number of hops', 'All of the above', 'May contain URLs not occurring in the adjacency list of the given URL', 'Lists all URLs not contained in the adjacency list of given URL'] | C | null | Document 1:::
Adjacency list
In graph theory and computer science, an adjacency list is a collection of unordered lists used to represent a finite graph. Each unordered list within an adjacency list describes the set of neighbors of a particular vertex in the graph. This is one of several commonly used representations of graphs for use in computer programs.
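To connect the plain adjacency-list representation above with the reference-list idea in this record's question, here is a toy Python sketch of reference-based encoding (an illustration of the general idea only, not the actual WebGraph format; the URLs are invented).

```python
# Plain adjacency lists: node -> list of successor URLs (toy data).
adj = {
    "u": ["a.com", "b.com", "c.com", "d.com"],
    "v": ["a.com", "b.com", "d.com", "e.com"],
}

def encode_with_reference(target, reference):
    """Encode `target` as a copy mask over `reference` plus explicit extras."""
    target_set, ref_set = set(target), set(reference)
    copy_mask = [1 if url in target_set else 0 for url in reference]
    extras = [url for url in target if url not in ref_set]
    return copy_mask, extras

mask, extras = encode_with_reference(adj["v"], adj["u"])
print("copy mask over u's list:", mask)    # which reference entries are reused
print("URLs listed only for v:", extras)   # the reference list may also hold URLs absent from v's list
```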
Document 2:::
Compressed data structure
The term compressed data structure arises in the computer science subfields of algorithms, data structures, and theoretical computer science. It refers to a data structure whose operations are roughly as fast as those of a conventional data structure for the problem, but whose size can be substantially smaller. The size of the compressed data structure is typically highly dependent upon the information entropy of the data being represented. Important examples of compressed data structures include the compressed suffix array and the FM-index, both of which can represent an arbitrary text of characters T for pattern matching.
Document 3:::
Compressed data structure
In other words, they simultaneously provide a compressed and quickly searchable representation of the text T. They represent a substantial space improvement over the conventional suffix tree and suffix array, which occupy many times more space than the size of T. They also support searching for arbitrary patterns, as opposed to the inverted index, which can support only word-based searches. In addition, inverted indexes do not have the self-indexing feature. An important related notion is that of a succinct data structure, which uses space roughly equal to the information-theoretic minimum, which is a worst-case notion of the space needed to represent the data.
Document 4:::
Compressed data structure
Given any input pattern P, they support the operation of finding if and where P appears in T. The search time is proportional to the sum of the length of pattern P, a very slow-growing function of the length of the text T, and the number of reported matches. The space they occupy is roughly equal to the size of the text T in entropy-compressed form, such as that obtained by Prediction by Partial Matching or gzip. Moreover, both data structures are self-indexing, in that they can reconstruct the text T in a random access manner, and thus the underlying text T can be discarded.
Document 5:::
Succinct data structure
In computer science, a succinct data structure is a data structure which uses an amount of space that is "close" to the information-theoretic lower bound, but (unlike other compressed representations) still allows for efficient query operations. The concept was originally introduced by Jacobson to encode bit vectors, (unlabeled) trees, and planar graphs. Unlike general lossless data compression algorithms, succinct data structures retain the ability to use them in-place, without decompressing them first. A related notion is that of a compressed data structure, insofar as the size of the stored or encoded data similarly depends upon the specific content of the data itself. |
epfl-collab | Data being classified as unstructured or structured depends on the: | ['Degree of abstraction', 'Level of human involvement', 'Type of physical storage', 'Amount of data '] | A | null | Document 1:::
Unstructured data
Unstructured data (or unstructured information) is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in fielded form in databases or annotated (semantically tagged) in documents. In 1998, Merrill Lynch said "unstructured data comprises the vast majority of data found in an organization, some estimates run as high as 80%."
Document 2:::
Semi-structured data
Semi-structured data is a form of structured data that does not obey the tabular structure of data models associated with relational databases or other forms of data tables, but nonetheless contains tags or other markers to separate semantic elements and enforce hierarchies of records and fields within the data. Therefore, it is also known as self-describing structure. In semi-structured data, the entities belonging to the same class may have different attributes even though they are grouped together, and the attributes' order is not important. Semi-structured data are increasingly occurring since the advent of the Internet where full-text documents and databases are not the only forms of data anymore, and different applications need a medium for exchanging information. In object-oriented databases, one often finds semi-structured data.
Document 3:::
Structured data analysis (statistics)
Structured data analysis is the statistical data analysis of structured data. This can arise either in the form of an a priori structure such as multiple-choice questionnaires or in situations with the need to search for structure that fits the given data, either exactly or approximately. This structure can then be used for making comparisons, predictions, manipulations etc.
Document 4:::
Structured data analysis (systems analysis)
Structured data analysis (SDA) is a method for analysing the flow of information within an organization using data flow diagrams. It was originally developed by IBM for systems analysis in electronic data processing, although it has now been adapted for use to describe the flow of information in any kind of project or organization, particularly in the construction industry where the nodes could be departments, contractors, customers, managers, workers etc.
Document 5:::
Structure mining
Structure mining or structured data mining is the process of finding and extracting useful information from semi-structured data sets. Graph mining, sequential pattern mining and molecule mining are special cases of structured data mining. |
epfl-collab | Suppose you have a search engine that retrieves the top 100 documents and
achieves 90% precision and 20% recall. You modify the search engine to
retrieve the top 200 and mysteriously, the precision stays the same. Which one
is CORRECT? | ['The F-score stays the same', 'This is not possible', 'The number of relevant documents is 450', 'The recall becomes 10%'] | C | null | Document 1:::
Precision and recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: $\text{precision} = \frac{\text{relevant retrieved instances}}{\text{all retrieved instances}}$. Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved.
Document 2:::
Precision and recall
Written as a formula: $\text{recall} = \frac{\text{relevant retrieved instances}}{\text{all relevant instances}}$. Both precision and recall are therefore based on relevance.
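Plugging the two formulas above into the numbers of this record's question gives a quick consistency check; it is pure arithmetic on the stated figures.

```python
retrieved = 100
precision = 0.9
recall = 0.2

relevant_retrieved = precision * retrieved     # 90 relevant documents in the top 100
total_relevant = relevant_retrieved / recall   # 90 / 0.2 = 450 relevant documents overall
print("total relevant documents:", total_relevant)

# If precision is still 0.9 over the top 200 results:
relevant_in_200 = 0.9 * 200
print("recall at 200:", relevant_in_200 / total_relevant)   # 180 / 450 = 0.4
```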
Document 3:::
Average precision
Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection.
Document 4:::
Precision and recall
More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors, for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned).
Document 5:::
Average precision
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query. |
epfl-collab | In the χ2 statistics for a binary feature, we obtain P(χ2 | DF = 1) > 0.05. This means in this case, it is assumed: | ['That the class label correlates with the feature', 'That the class label is independent of the feature', 'That the class labels depends on the feature', 'None of the above'] | B | null | Document 1:::
5 sigma
In the case where X takes random values from a finite data set x1, x2, ..., xN, with each value having the same probability, the standard deviation is $\sigma = \sqrt{\tfrac{1}{N}\left[(x_1-\mu)^2 + (x_2-\mu)^2 + \cdots + (x_N-\mu)^2\right]}$, where $\mu$ is the mean value, or, by using summation notation, $\sigma = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N}(x_i-\mu)^2}$. If, instead of having equal probabilities, the values have different probabilities, let x1 have probability p1, x2 have probability p2, ..., xN have probability pN. In this case, the standard deviation will be $\sigma = \sqrt{\sum_{i=1}^{N} p_i (x_i-\mu)^2}$, where $\mu = \sum_{i=1}^{N} p_i x_i$.
Document 2:::
Sparse Distributed Memory
The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d). The normal distribution F with mean n/2 and standard deviation $\sqrt{n}/2$ is a good approximation to it: $N(d) = \Pr\{d(x, y) \leq d\} \cong F\left((d - n/2)/\sqrt{n/4}\right)$. Tendency to orthogonality: an outstanding property of N is that most of it lies at approximately the mean (indifference) distance n/2 from a point (and its complement). In other words, most of the space is nearly orthogonal to any given point, and the larger n is, the more pronounced is this effect.
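The normal approximation quoted above is easy to sanity-check numerically for a modest dimension n using only the Python standard library; the choice n = 100 below is arbitrary and the snippet is illustrative only.

```python
from math import comb, erf, sqrt

def binom_cdf(d, n):
    # Exact Pr{distance <= d} when each of n bits differs independently with probability 1/2.
    return sum(comb(n, k) for k in range(d + 1)) / 2 ** n

def normal_cdf(x, mean, sd):
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

n = 100
for d in (40, 50, 60):
    print(d, round(binom_cdf(d, n), 4), round(normal_cdf(d, n / 2, sqrt(n) / 2), 4))
```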
Document 3:::
Sparse distributed memory
The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d). The normal distribution F with mean n/2 and standard deviation $\sqrt{n}/2$ is a good approximation to it: $N(d) = \Pr\{d(x, y) \leq d\} \cong F\left((d - n/2)/\sqrt{n/4}\right)$. Tendency to orthogonality: an outstanding property of N is that most of it lies at approximately the mean (indifference) distance n/2 from a point (and its complement). In other words, most of the space is nearly orthogonal to any given point, and the larger n is, the more pronounced is this effect.
Document 4:::
Probit function
If we consider the familiar fact that the standard normal distribution places 95% of probability between −1.96 and 1.96, and is symmetric around zero, it follows that $\Phi(-1.96) = 0.025 = 1 - \Phi(1.96)$. The probit function gives the 'inverse' computation, generating a value of a standard normal random variable, associated with specified cumulative probability. Continuing the example, $\operatorname{probit}(0.025) = -1.96 = -\operatorname{probit}(0.975)$. In general, $\Phi(\operatorname{probit}(p)) = p$ and $\operatorname{probit}(\Phi(z)) = z$.
Document 5:::
Chi distribution
If $Z_1, \ldots, Z_k$ are $k$ independent, normally distributed random variables with mean 0 and standard deviation 1, then the statistic $Y = \sqrt{\sum_{i=1}^{k} Z_i^2}$ is distributed according to the chi distribution. The chi distribution has one positive integer parameter $k$, which specifies the degrees of freedom (i.e. the number of random variables $Z_i$). The most familiar examples are the Rayleigh distribution (chi distribution with two degrees of freedom) and the Maxwell–Boltzmann distribution of the molecular speeds in an ideal gas (chi distribution with three degrees of freedom). |
epfl-collab | Which of the following is correct regarding the use of Hidden Markov Models (HMMs) for entity recognition in text documents? | ['The cost of predicting a word is linear in the lengths of the text preceding the word.', 'The label of one word is predicted based on all the previous labels', 'An HMM model can be built using words enhanced with morphological features as input.', 'The cost of learning the model is quadratic in the lengths of the text.'] | C | null | Document 1:::
Sequence labeling
Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field.
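For the smoothing mentioned in this record's question, here is a minimal sketch of add-λ (Laplace-style) smoothing of HMM emission probabilities; the tag/word counts and the vocabulary are invented, and this is not the course's reference implementation.

```python
from collections import Counter

# Toy emission counts: how often each (tag, word) pair was seen in training.
emission_counts = Counter({("PER", "alice"): 3, ("PER", "bob"): 1, ("LOC", "paris"): 2})
tag_totals = Counter()
for (tag, _), c in emission_counts.items():
    tag_totals[tag] += c

vocab = {"alice", "bob", "paris", "zurich"}   # "zurich" never appears in training

def emission_prob(tag, word, lam=0.5):
    # Add-lambda smoothing keeps unseen words from getting zero probability.
    return (emission_counts[(tag, word)] + lam) / (tag_totals[tag] + lam * len(vocab))

for tag in ("PER", "LOC"):
    print(tag, "zurich", round(emission_prob(tag, "zurich"), 3))
```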
Document 2:::
Sequence labeling
Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field.
Document 3:::
Maximum-entropy Markov model
In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction.
Document 4:::
Semantic analysis (machine learning)
A prominent example is PLSI. Latent Dirichlet allocation involves attributing document terms to topics. n-grams and hidden Markov models work by representing the term stream as a Markov chain where each term is derived from the few terms before it.
Document 5:::
Text categorization
Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. |
epfl-collab | 10 itemsets out of 100 contain item A, of which 5 also contain B. The rule A -> B has: | ['5% support and 10% confidence', '5% support and 50% confidence', '10% support and 50% confidence', '10% support and 10% confidence'] | B | null | Document 1:::
Inclusion-exclusion principle
In combinatorics, a branch of mathematics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as $|A \cup B| = |A| + |B| - |A \cap B|$, where A and B are two finite sets and |S| indicates the cardinality of a set S (which may be considered as the number of elements of the set, if the set is finite). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in the intersection of the two sets and the count is corrected by subtracting the size of the intersection. The inclusion-exclusion principle, being a generalization of the two-set case, is perhaps more clearly seen in the case of three sets, which for the sets A, B and C is given by $|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|$. This formula can be verified by counting how many times each region in the Venn diagram figure is included in the right-hand side of the formula.
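A two-line Python check of the two-set identity above (the example sets are arbitrary):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}
assert len(A.union(B)) == len(A) + len(B) - len(A.intersection(B))   # 5 == 4 + 3 - 2
print(len(A.union(B)))
```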
Document 2:::
Affinity analysis
Also, a priori algorithm is used to reduce the search space for the problem. The support metric in the association rule learning algorithm is defined as the frequency of the antecedent or consequent appearing together in a data set. Moreover, confidence is expressed as the reliability of the association rules determined by the ratio of the data records containing both A and B. The minimum threshold for support and confidence are inputs to the model. Considering all the above-mentioned definitions, affinity analysis can develop rules that will predict the occurrence of an event based on the occurrence of other events.
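Using the definitions just quoted on the numbers in this record's question (100 itemsets in total, 10 containing A, 5 containing both A and B):

```python
total = 100
contains_A = 10
contains_A_and_B = 5

support = contains_A_and_B / total            # 0.05 -> 5% support for the rule A -> B
confidence = contains_A_and_B / contains_A    # 0.50 -> 50% confidence
print(f"support={support:.0%}, confidence={confidence:.0%}")
```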
Document 3:::
Affinity analysis
Also, a priori algorithm is used to reduce the search space for the problem. The support metric in the association rule learning algorithm is defined as the frequency of the antecedent or consequent appearing together in a data set. Moreover, confidence is expressed as the reliability of the association rules determined by the ratio of the data records containing both A and B. The minimum threshold for support and confidence are inputs to the model. Considering all the above-mentioned definitions, affinity analysis can develop rules that will predict the occurrence of an event based on the occurrence of other events.
Document 4:::
Subset inclusion
In mathematics, set A is a subset of a set B if all elements of A are also elements of B; B is then a superset of A. It is possible for A and B to be equal; if they are unequal, then A is a proper subset of B. The relationship of one set being a subset of another is called inclusion (or sometimes containment). A is a subset of B may also be expressed as B includes (or contains) A or A is included (or contained) in B. A k-subset is a subset with k elements. The subset relation defines a partial order on sets. In fact, the subsets of a given set form a Boolean algebra under the subset relation, in which the join and meet are given by intersection and union, and the subset relation itself is the Boolean inclusion relation.
Document 5:::
Item tree analysis
Other typical examples are questionnaires where the items are statements to which subjects can agree (1) or disagree (0). Depending on the content of the items it is possible that the response of a subject to an item j determines her or his responses to other items. It is, for example, possible that each subject who agrees to item j will also agree to item i. In this case we say that item j implies item i (short $i \rightarrow j$). The goal of an ITA is to uncover such deterministic implications from the data set D. |
epfl-collab | Which of the following is correct regarding the use of Hidden Markov Models (HMMs) for entity recognition in text documents? | ['When computing the emission probabilities, a word can be replaced by a morphological feature (e.g., the number of uppercase first characters)', 'HMMs cannot predict the label of a word that appears only in the test set', 'If the smoothing parameter λ is equal to 1, the emission probabilities for all the words in the test set will be equal', 'The label of one word is predicted based on all the previous labels'] | A | null | Document 1:::
Sequence labeling
Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field.
Document 2:::
Sequence labeling
Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field.
Document 3:::
Maximum-entropy Markov model
In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction.
Document 4:::
Semantic analysis (machine learning)
A prominent example is PLSI. Latent Dirichlet allocation involves attributing document terms to topics. n-grams and hidden Markov models work by representing the term stream as a Markov chain where each term is derived from the few terms before it.
Document 5:::
Text categorization
Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. |
epfl-collab | A basic statement in RDF would be expressed in the relational data model by a table | ['with three attributes', 'with one attribute', 'with two attributes', 'cannot be expressed in the relational data model'] | C | null | Document 1:::
Relational Model
The relational model (RM) is an approach to managing data using a structure and language consistent with first-order predicate logic, first described in 1969 by English computer scientist Edgar F. Codd, where all data is represented in terms of tuples, grouped into relations. A database organized in terms of the relational model is a relational database. The purpose of the relational model is to provide a declarative method for specifying data and queries: users directly state what information the database contains and what information they want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for answering queries.
Document 2:::
Relational Model
Most relational databases use the SQL data definition and query language; these systems implement what can be regarded as an engineering approximation to the relational model. A table in a SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, SQL databases deviate from the relational model in many details, and Codd fiercely argued against deviations that compromise the original principles.
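A minimal sketch of this correspondence, using Python's built-in sqlite3 module and hypothetical data, stores subject–predicate–object statements as rows of a single three-column relation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triple (subject TEXT, predicate TEXT, object TEXT)")
conn.executemany(
    "INSERT INTO triple VALUES (?, ?, ?)",
    [("ex:alice", "ex:knows", "ex:bob"),
     ("ex:alice", "rdf:type", "ex:Person")],
)
# Each row is one statement; a type statement is just another triple here.
for row in conn.execute(
        "SELECT object FROM triple WHERE subject = 'ex:alice' AND predicate = 'ex:knows'"):
    print(row[0])   # -> ex:bob
```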
Document 3:::
Logical schema
A logical data model or logical schema is a data model of a specific problem domain expressed independently of a particular database management product or storage technology (physical data model) but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags. This is as opposed to a conceptual data model, which describes the semantics of an organization without reference to technology.
Document 4:::
SPARQL
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March 2013. SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery.
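As an illustration (hypothetical triples, using the third-party rdflib Python library), a query made of a single triple pattern can be evaluated like this:

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")            # hypothetical namespace
g = Graph()
g.add((EX.alice, EX.knows, EX.bob))              # each statement is one triple
g.add((EX.alice, EX["name"], Literal("Alice")))

# A SPARQL query consisting of a single triple pattern.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?who WHERE { ex:alice ex:knows ?who }
""")
for row in results:
    print(row.who)                               # -> http://example.org/bob
```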
Document 5:::
Oracle NoSQL Database
NoSQL Database supports tabular model. Each row is identified by a unique key, and has a value, of arbitrary length, which is interpreted by the application. The application can manipulate (insert, delete, update, read) a single row in a transaction. The application can also perform an iterative, non-transactional scan of all the rows in the database. |
epfl-collab | Which of the following statements is wrong regarding RDF? | ['The object value of a type statement corresponds to a table name in SQL', 'Blank nodes in RDF graphs correspond to the special value NULL in SQL', 'RDF graphs can be encoded as SQL databases', 'An RDF statement would be expressed in SQL as a tuple in a table'] | B | null | Document 1:::
SPARQL
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March 2013. SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery.
Document 2:::
List of SPARQL implementations
This list shows notable triplestores, APIs, and other storage engines that have implemented the W3C SPARQL standard: Amazon Neptune, Apache Marmotta, AllegroGraph, Eclipse RDF4J, Apache Jena with ARQ, Blazegraph, Cray Urika-GD, IBM Db2 (removed in v11.5), KAON2, MarkLogic, Mulgara, NitrosBase, Ontotext GraphDB, Oracle DB Enterprise Spatial & Graph, RDFLib Python library, Redland / Redstore, Virtuoso.
Document 3:::
NGSI-LD
The NGSI-LD information model represents Context Information as entities that have properties and relationships to other entities. It is derived from property graphs, with semantics formally defined on the basis of RDF and the semantic web framework. It can be serialized using JSON-LD. Every entity and relationship is given a unique IRI reference as identifier, making the corresponding data exportable as linked data datasets. The -LD suffix denotes this affiliation to the linked data universe.
Document 4:::
Knowledge discovery
Knowledge extraction is the creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL (data warehouse), the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data. The RDB2RDF W3C group is currently standardizing a language for extraction of resource description frameworks (RDF) from relational databases. Another popular example for knowledge extraction is the transformation of Wikipedia into structured data and also the mapping to existing knowledge (see DBpedia and Freebase).
Document 5:::
Global Research Identifier Database
The 30th public release of GRID was on 27 August 2018, and the database contained 89,506 entries. It is available in the Resource Description Framework (RDF) specification as linked data, and can therefore be linked to other data. Containing 14,401 relationships, GRID models two types of relationships: a parent-child relationship that defines a subordinate association, and a related relationship that describes other associations. In December 2016, Digital Science released GRID under a Creative Commons CC0 licence, without restriction under copyright or database law. The database is available for download as a ZIP archive, which includes the entire database in JSON and CSV file formats. From all the sources from which it draws information, including funding datasets, Digital Science claims that GRID covers 92% of institutions.
epfl-collab | The number of non-zero entries in a column of a term-document matrix indicates: | ['how relevant a term is for a document ', 'how many terms of the vocabulary a document contains', 'none of the other responses is correct', 'how often a term of the vocabulary occurs in a document'] | B | null | Document 1:::
Zero matrix
In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of m × n matrices, and is denoted by the symbol O or 0 followed by subscripts corresponding to the dimension of the matrix as the context sees fit. Some examples of zero matrices are 0_{1,1} = [0], 0_{2,2} = [0 0; 0 0], and 0_{2,3} = [0 0 0; 0 0 0].
Document 2:::
Pascal matrix
The non-zero elements of a Pascal matrix are given by binomial coefficients, such that the indices i, j start at 0, and ! denotes the factorial; for the symmetric Pascal matrix, for example, the entry in row i and column j is (i + j)!/(i! j!).
Document 3:::
Zernike polynomials
Applications often involve linear algebra, where an integral over a product of Zernike polynomials and some other factor builds a matrix element. To enumerate the rows and columns of these matrices by a single index, a conventional mapping of the two indices n and l to a single index j has been introduced by Noll. The table of this association Z_n^l → Z_j starts as follows (sequence A176988 in the OEIS): j = n(n+1)/2 + |l| + {0 if l > 0 and n ≡ 0, 1 (mod 4); 0 if l < 0 and n ≡ 2, 3 (mod 4); 1 if l ≥ 0 and n ≡ 2, 3 (mod 4); 1 if l ≤ 0 and n ≡ 0, 1 (mod 4)}.
Document 4:::
Boolean model of information retrieval
An index term is a word or expression, which may be stemmed, describing or characterizing a document, such as a keyword given for a journal article. Let T be the set of all such index terms. A document is any subset of T. Let D be the set of all documents.
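A small worked example (a hypothetical two-document corpus) makes the set view concrete: in a binary term-document matrix, the number of non-zero entries in a document's column equals the number of distinct vocabulary terms that the document contains.

```python
# Toy corpus (hypothetical); rows = vocabulary terms, columns = documents.
docs = {"d1": "the cat sat on the mat",
        "d2": "the dog sat"}
vocab = sorted({w for text in docs.values() for w in text.split()})

# Binary term-document matrix: entry is 1 if the term occurs in the document.
matrix = {t: {d: int(t in text.split()) for d, text in docs.items()} for t in vocab}

for d in docs:
    nonzero = sum(matrix[t][d] for t in vocab)
    print(d, nonzero, len(set(docs[d].split())))   # the two counts coincide
```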
Document 5:::
Logical matrix
A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science. |
epfl-collab | What is TRUE regarding Fagin's algorithm? | ['It performs a complete scan over the posting files', 'It provably returns the k documents with the largest aggregate scores', 'Posting files need to be indexed by TF-IDF weights', 'It never reads more than (kn)1⁄2 entries from a posting list'] | B | null | Document 1:::
Fagin's theorem
Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines.
Document 2:::
Ford-Fulkerson algorithm
The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is sometimes called a "method" instead of an "algorithm" as the approach to finding augmenting paths in a residual graph is not fully specified or it is specified in several implementations with different running times. It was published in 1956 by L. R. Ford Jr. and D. R. Fulkerson.
Document 3:::
Ford-Fulkerson algorithm
The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path.
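A compact sketch of this augmenting-path idea, using breadth-first search to find paths with remaining capacity (the Edmonds–Karp variant) on a small hypothetical network:

```python
from collections import deque

def max_flow(cap, s, t):
    # cap[u][v] = capacity of edge u->v; residual capacities are updated in place.
    n = len(cap)
    flow = 0
    while True:
        # BFS for an augmenting path with available capacity on every edge.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:              # no more augmenting paths
            return flow
        # Find the bottleneck capacity along the path, then push flow along it.
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

# Hypothetical 4-node network: 0 = source, 3 = sink.
capacity = [[0, 3, 2, 0],
            [0, 0, 1, 2],
            [0, 0, 0, 2],
            [0, 0, 0, 0]]
print(max_flow(capacity, 0, 3))   # -> 4
```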
Document 4:::
Fibonacci search technique
In computer science, the Fibonacci search technique is a method of searching a sorted array using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers. Compared to binary search where the sorted array is divided into two equal-sized parts, one of which is examined further, Fibonacci search divides the array into two parts that have sizes that are consecutive Fibonacci numbers. On average, this leads to about 4% more comparisons to be executed, but it has the advantage that one only needs addition and subtraction to calculate the indices of the accessed array elements, while classical binary search needs bit-shift (see Bitwise operation), division or multiplication, operations that were less common at the time Fibonacci search was first published. Fibonacci search has an average- and worst-case complexity of O(log n) (see Big O notation).
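A sketch of the procedure (standard formulation; the example array is hypothetical) shows how only additions and subtractions are needed to compute the probe indices:

```python
def fibonacci_search(arr, x):
    # Search for x in the sorted list arr; return an index of x, or -1 if absent.
    n = len(arr)
    fib2, fib1 = 0, 1            # F(k-2), F(k-1)
    fib = fib2 + fib1            # F(k): smallest Fibonacci number >= n
    while fib < n:
        fib2, fib1 = fib1, fib
        fib = fib2 + fib1
    offset = -1                  # index of the last element eliminated on the left
    while fib > 1:
        i = min(offset + fib2, n - 1)        # probe index, computed by addition only
        if arr[i] < x:                       # discard arr[..i]; step one Fibonacci down
            fib, fib1, fib2 = fib1, fib2, fib1 - fib2
            offset = i
        elif arr[i] > x:                     # discard arr[i..]; step two Fibonacci down
            fib, fib1, fib2 = fib2, fib1 - fib2, 2 * fib2 - fib1
        else:
            return i
    if fib1 and offset + 1 < n and arr[offset + 1] == x:
        return offset + 1
    return -1

print(fibonacci_search([1, 3, 5, 7, 9, 11, 13], 7))   # -> 3
```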
Document 5:::
Faugère's F4 and F5 algorithms
This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences. |
epfl-collab | A false negative in sampling can only occur for itemsets with support smaller than | ['p*s', 'p*m', 'the threshold s', 'None of the above'] | D | null | Document 1:::
Multiple comparisons problem
However, if 100 tests are each conducted at the 5% level and all corresponding null hypotheses are true, the expected number of incorrect rejections (also known as false positives or Type I errors) is 5. If the tests are statistically independent from each other (i.e. are performed on independent samples), the probability of at least one incorrect rejection is approximately 99.4%. The multiple comparisons problem also applies to confidence intervals.
Document 2:::
Precision and recall
Seven dogs were missed (false negatives), and seven cats were correctly excluded (true negatives). The program's precision is then 5/8 (true positives / selected elements) while its recall is 5/12 (true positives / relevant elements). Adopting a hypothesis-testing approach from statistics, in which, in this case, the null hypothesis is that a given item is irrelevant (i.e., not a dog), absence of type I and type II errors (i.e., perfect specificity and sensitivity of 100% each) corresponds respectively to perfect precision (no false positive) and perfect recall (no false negative).
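Plugging the counts from this example into the usual definitions (a quick check in Python):

```python
# Counts from the example above: 8 photos flagged as dogs, 5 of them actually dogs,
# 12 dogs in total, and 7 cats correctly excluded.
tp, fp, fn, tn = 5, 3, 7, 7

precision = tp / (tp + fp)   # 5/8  -> fraction of selected items that are relevant
recall = tp / (tp + fn)      # 5/12 -> fraction of relevant items that are selected
print(precision, recall)     # 0.625 0.4166...
```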
Document 3:::
False positive rate
In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification). The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio.
Document 4:::
Type I error rate
In terms of false positives and false negatives, a positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; "false" means the conclusion drawn is incorrect. Thus, a type I error is equivalent to a false positive, and a type II error is equivalent to a false negative.
Document 5:::
Precision and recall
For classification tasks, the terms true positives, true negatives, false positives, and false negatives (see Type I and type II errors for definitions) compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the classifier's prediction (sometimes known as the expectation), and the terms true and false refer to whether that prediction corresponds to the external judgment (sometimes known as the observation). Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix. Precision and recall are then defined as precision = TP / (TP + FP) and recall = TP / (TP + FN). Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity.
epfl-collab | Why is XML a document model? | ['It has a serialized representation', 'It supports domain-specific schemas', 'It uses HTML tags', 'It supports application-specific markup'] | A | null | Document 1:::
XML schema
An XML schema is a description of a type of XML document, typically expressed in terms of constraints on the structure and content of documents of that type, above and beyond the basic syntactical constraints imposed by XML itself. These constraints are generally expressed using some combination of grammatical rules governing the order of elements, Boolean predicates that the content must satisfy, data types governing the content of elements and attributes, and more specialized rules such as uniqueness and referential integrity constraints. There are languages developed specifically to express XML schemas. The document type definition (DTD) language, which is native to the XML specification, is a schema language that is of relatively limited capability, but that also has other uses in XML aside from the expression of schemas.
Document 2:::
Root element
DOM Level 1 defines, for every XML document, an object representation of the document itself and an attribute or property on the document called documentElement. This property provides access to an object of type element which directly represents the root element of the document. There can be other XML nodes outside of the root element.
Document 3:::
Object modeling
Such an interface is said to be the object model of the represented service or system. For example, the Document Object Model (DOM) is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page. There is a Microsoft Excel object model for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver is an object model for controlling an astronomical telescope. An object model consists of the following important features: object references (objects can be accessed via object references).
Document 4:::
Root element
Each XML document has exactly one single root element. It encloses all the other elements and is, therefore, the sole parent element to all the other elements. Root elements are also called document elements. In HTML, the root element is the html element. The World Wide Web Consortium defines not only the specifications for XML itself, but also the DOM, which is a platform- and language-independent standard object model for representing XML documents.
Document 5:::
XML validation
XML validation is the process of checking a document written in XML (eXtensible Markup Language) to confirm that it is both well-formed and also "valid" in that it follows a defined structure. A well-formed document follows the basic syntactic rules of XML, which are the same for all XML documents. A valid document also respects the rules dictated by a particular DTD or XML schema. Automated tools – validators – can perform well-formedness tests and many other validation tests, but not those that require human judgement, such as correct application of a schema to a data set. |
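A minimal illustration of the well-formedness part of this check, using Python's standard-library XML parser on hypothetical snippets (validation against a DTD or schema would require additional tooling):

```python
import xml.etree.ElementTree as ET

doc = "<note><to>Alice</to><from>Bob</from></note>"   # hypothetical well-formed document
broken = "<note><to>Alice</note>"                     # mismatched tags

for text in (doc, broken):
    try:
        root = ET.fromstring(text)        # parses only well-formed XML
        print("well-formed, root element:", root.tag)
    except ET.ParseError as err:
        print("not well-formed:", err)
```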
epfl-collab | A retrieval model attempts to capture | ['the interface by which a user is accessing information', 'the structure by which a document is organised', 'the importance a user gives to a piece of information for a query', 'the formal correctness of a query formulation by user'] | C | null | Document 1:::
Boolean model of information retrieval
The (standard) Boolean model of information retrieval (BIR) is a classical information retrieval (IR) model and, at the same time, the first and most-adopted one. It is used by many IR systems to this day. The BIR is based on Boolean logic and classical set theory in that both the documents to be searched and the user's query are conceived as sets of terms (a bag-of-words model). Retrieval is based on whether or not the documents contain the query terms.
Document 2:::
Boolean model of information retrieval
This operation is called retrieval and consists of the following two steps: 1. For each W_j in Q, find the set S_j of documents that satisfy W_j. 2. The set of documents that satisfy Q is then obtained by combining the sets S_j according to the Boolean operators in Q.
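A tiny sketch of these two steps over hypothetical posting sets, with each component of the query evaluated first and the results then combined:

```python
# Hypothetical document collection represented by term posting sets.
postings = {"cat": {1, 3, 4}, "dog": {2, 3}, "mat": {1, 4}}
all_docs = {1, 2, 3, 4}

# Query Q: (cat AND mat) OR (dog AND NOT cat)
s1 = postings["cat"] & postings["mat"]               # documents satisfying W1
s2 = postings["dog"] & (all_docs - postings["cat"])  # documents satisfying W2
print(s1 | s2)                                       # documents satisfying Q: {1, 2, 4}
```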
Document 3:::
Knowledge retrieval
Knowledge retrieval seeks to return information in a structured form, consistent with human cognitive processes as opposed to simple lists of data items. It draws on a range of fields including epistemology (theory of knowledge), cognitive psychology, cognitive neuroscience, logic and inference, machine learning and knowledge discovery, linguistics, and information technology.
Document 4:::
Document retrieval
Document retrieval is sometimes referred to as, or as a branch of, text retrieval. Text retrieval is a branch of information retrieval where the information is stored primarily in the form of text. Text databases became decentralized thanks to the personal computer. Text retrieval is a critical area of study today, since it is the fundamental basis of all internet search engines.
Document 5:::
Document retrieval
Document retrieval is defined as the matching of some stated user query against a set of free-text records. These records could be any type of mainly unstructured text, such as newspaper articles, real estate records or paragraphs in a manual. User queries can range from multi-sentence full descriptions of an information need to a few words. |
epfl-collab | When computing HITS, the initial values | ['Are set all to 1', 'Are set all to 1/n', 'Are chosen randomly', 'Are set all to 1/sqrt(n)'] | D | null | Document 1:::
Initial condition
In mathematics and particularly in dynamic systems, an initial condition, in some contexts called a seed value, is a value of an evolving variable at some point in time designated as the initial time (typically denoted t = 0). For a system of order k (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension n (that is, with n different evolving variables, which together can be denoted by an n-dimensional coordinate vector), generally nk initial conditions are needed in order to trace the system's variables forward through time. In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables (state variables) at any future time.
Document 2:::
Hit detection
In computer graphics programming, hit-testing (hit detection, picking, or pick correlation) is the process of determining whether a user-controlled cursor (such as a mouse cursor or touch-point on a touch-screen interface) intersects a given graphical object (such as a shape, line, or curve) drawn on the screen. Hit-testing may be performed on the movement or activation of a mouse or other pointing device. Hit-testing is used by GUI environments to respond to user actions, such as selecting a menu item or a target in a game based on its visual location. In web programming languages such as HTML, SVG, and CSS, this is associated with the concept of pointer-events (e.g. user-initiated cursor movement or object selection). Collision detection is a related concept for detecting intersections of two or more different graphical objects, rather than intersection of a cursor with one or more graphical objects.
Document 3:::
Impact parameter
In physics, the impact parameter b is defined as the perpendicular distance between the path of a projectile and the center of a potential field U(r) created by an object that the projectile is approaching. It is often referred to in nuclear physics (see Rutherford scattering) and in classical mechanics. The impact parameter is related to the scattering angle θ by θ = π − 2b ∫_{r_min}^{∞} dr / (r² √(1 − (b/r)² − 2U/(m v_∞²))), where v_∞ is the velocity of the projectile when it is far from the center, and r_min is its closest distance from the center.
Document 4:::
Predicted impact point
Modern combat aircraft are equipped to calculate the PIP for onboard weapons at any given time. Using the PIP marker, pilots can achieve good accuracy at ranges of up to several kilometers, whether the target is ground-based or airborne. Variables included in the calculation are aircraft velocity, target velocity, target elevation, distance to target, forces on the projectile (drag, gravity), and others.
Document 5:::
Binary collision approximation
In condensed-matter physics, the binary collision approximation (BCA) is a heuristic used to more efficiently simulate the penetration depth and defect production by energetic ions (with kinetic energies in the kilo-electronvolt (keV) range or higher) in solids. In the method, the ion is approximated to travel through a material by experiencing a sequence of independent binary collisions with sample atoms (nuclei). Between the collisions, the ion is assumed to travel in a straight path, experiencing electronic stopping power, but losing no energy in collisions with nuclei. |
epfl-collab | When indexing a document collection using an inverted file, the main space requirement is implied by | ['The postings file', 'The vocabulary', 'The index file', 'The access structure'] | A | null | Document 1:::
Inverted index
In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content). The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines.
Document 2:::
Inverted index
Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204. There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created.
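A word-level inverted index of the kind described here can be sketched in a few lines (toy documents, with term positions kept alongside document identifiers):

```python
from collections import defaultdict

# Build a word-level inverted index: term -> list of (doc_id, position) postings.
docs = {1: "new home sales top forecasts",
        2: "home sales rise in july"}

index = defaultdict(list)
for doc_id, text in docs.items():
    for pos, term in enumerate(text.split()):
        index[term].append((doc_id, pos))

print(index["sales"])   # [(1, 2), (2, 1)]
print(index["home"])    # [(1, 1), (2, 0)]
```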
Document 3:::
Compressed data structure
In other words, they simultaneously provide a compressed and quickly searchable representation of the text T. They represent a substantial space improvement over the conventional suffix tree and suffix array, which occupy many times more space than the size of T. They also support searching for arbitrary patterns, as opposed to the inverted index, which can support only word-based searches. In addition, inverted indexes do not have the self-indexing feature. An important related notion is that of a succinct data structure, which uses space roughly equal to the information-theoretic minimum, which is a worst-case notion of the space needed to represent the data.
Document 4:::
Extent (file systems)
In computing, an extent is a contiguous area of storage reserved for a file in a file system, represented as a range of block numbers, or tracks on count key data devices. A file can consist of zero or more extents; one file fragment requires one extent. The direct benefit is in storing each range compactly as two numbers, instead of canonically storing every block number in the range. Also, extent allocation results in less file fragmentation.
Document 5:::
Extent (file systems)
Extent-based file systems can also eliminate most of the metadata overhead of large files that would traditionally be taken up by the block-allocation tree. But because the savings are small compared to the amount of stored data (for all file sizes in general) but make up a large portion of the metadata (for large files), the overall benefits in storage efficiency and performance are slight.In order to resist fragmentation, several extent-based file systems do allocate-on-flush. Many modern fault-tolerant file systems also do copy-on-write, although that increases fragmentation. |
epfl-collab | In an FP tree, the leaf nodes are the ones with: | ['Lowest confidence', 'None of the other options.', 'Least in the alphabetical order', 'Lowest support'] | D | null | Document 1:::
Leaf power
In the mathematical area of graph theory, a k-leaf power of a tree T is a graph G whose vertices are the leaves of T and whose edges connect pairs of leaves whose distance in T is at most k. That is, G is an induced subgraph of the graph power T^k, induced by the leaves of T. For a graph G constructed in this way, T is called a k-leaf root of G. A graph is a leaf power if it is a k-leaf power for some k. These graphs have applications in phylogeny, the problem of reconstructing evolutionary trees.
Document 2:::
Leaflet (botany)
A leaflet (occasionally called foliole) in botany is a leaf-like part of a compound leaf. Though it resembles an entire leaf, a leaflet is not borne on a main plant stem or branch, as a leaf is, but rather on a petiole or a branch of the leaf. Compound leaves are common in many plant families and they differ widely in morphology. The two main classes of compound leaf morphology are palmate and pinnate. For example, a hemp plant has palmate compound leaves, whereas some species of Acacia have pinnate leaves. The ultimate free division (or leaflet) of a compound leaf, or a pinnate subdivision of a multipinnate leaf is called a pinnule or pinnula.
Document 3:::
Unrooted binary tree
A free tree or unrooted tree is a connected undirected graph with no cycles. The vertices with one neighbor are the leaves of the tree, and the remaining vertices are the internal nodes of the tree. The degree of a vertex is its number of neighbors; in a tree with more than one node, the leaves are the vertices of degree one. An unrooted binary tree is a free tree in which all internal nodes have degree exactly three.
Document 4:::
Leaf
A leaf (PL: leaves) is a principal appendage of the stem of a vascular plant, usually borne laterally aboveground and specialized for photosynthesis. Leaves are collectively called foliage, as in "autumn foliage", while the leaves, stem, flower, and fruit collectively form the shoot system. In most leaves, the primary photosynthetic tissue is the palisade mesophyll and is located on the upper side of the blade or lamina of the leaf but in some species, including the mature foliage of Eucalyptus, palisade mesophyll is present on both sides and the leaves are said to be isobilateral. Most leaves are flattened and have distinct upper (adaxial) and lower (abaxial) surfaces that differ in color, hairiness, the number of stomata (pores that intake and output gases), the amount and structure of epicuticular wax and other features.
Document 5:::
Good spanning tree
In the mathematical field of graph theory, a good spanning tree T of an embedded planar graph G is a rooted spanning tree of G whose non-tree edges satisfy the following conditions: there is no non-tree edge (u, v) where u and v lie on a path from the root of T to a leaf; and the edges incident to a vertex v can be divided into three sets X_v, Y_v and Z_v, where X_v is a set of non-tree edges that terminate in the red zone, Y_v is a set of tree edges that are children of v, and Z_v is a set of non-tree edges that terminate in the green zone.
epfl-collab | Which statement is correct? | ['The Viterbi algorithm works because it is applied to an HMM model that makes an independence assumption on the word dependencies in sentences', 'The Viterbi algorithm works because words are independent in a sentence', 'The Viterbi algorithm works because it makes an independence assumption on the word dependencies in sentences', 'The Viterbi algorithm works because it is applied to an HMM model that captures independence of words in a sentence'] | A | null | Document 1:::
Statement (logic)
In logic and semantics, the term statement is variously understood to mean either: a meaningful declarative sentence that is true or false, or a proposition. Which is the assertion that is made by (i.e., the meaning of) a true or false declarative sentence.In the latter case, a statement is distinct from a sentence in that a sentence is only one formulation of a statement, whereas there may be many other formulations expressing the same statement. By a statement, I mean "that which one states", not one's stating of it. There are many interpretations of what the term statement means, but generally, it indicates either a meaningful declarative sentence that is either true or false (bivalence).
Document 2:::
Statement (logic)
In logic and semantics, the term statement is variously understood to mean either: a meaningful declarative sentence that is true or false, or a proposition. Which is the assertion that is made by (i.e., the meaning of) a true or false declarative sentence.In the latter case, a statement is distinct from a sentence in that a sentence is only one formulation of a statement, whereas there may be many other formulations expressing the same statement. By a statement, I mean "that which one states", not one's stating of it. There are many interpretations of what the term statement means, but generally, it indicates either a meaningful declarative sentence that is either true or false (bivalence).
Document 3:::
Atomic fact
In logic and analytic philosophy, an atomic sentence is a type of declarative sentence which is either true or false (may also be referred to as a proposition, statement or truthbearer) and which cannot be broken down into other simpler sentences. For example, "The dog ran" is an atomic sentence in natural language, whereas "The dog ran and the cat hid" is a molecular sentence in natural language. From a logical analysis point of view, the truth or falsity of sentences in general is determined by only two things: the logical form of the sentence and the truth or falsity of its simple sentences. This is to say, for example, that the truth of the sentence "John is Greek and John is happy" is a function of the meaning of "and", and the truth values of the atomic sentences "John is Greek" and "John is happy".
Document 4:::
Statement (logic)
A proposition is an assertion that is made by (i.e., the meaning of) a true or false declarative sentence. A proposition is what a statement means, it is the notion or idea that a statement expresses, i.e., what it represents. It could be said that "2 + 2 = 4" and "two plus two equals four" are two different statements that are expressing the same proposition in two different ways.
Document 5:::
Statement (logic)
A proposition is an assertion that is made by (i.e., the meaning of) a true or false declarative sentence. A proposition is what a statement means, it is the notion or idea that a statement expresses, i.e., what it represents. It could be said that "2 + 2 = 4" and "two plus two equals four" are two different statements that are expressing the same proposition in two different ways. |
epfl-collab | Which of the following is WRONG about inverted files? (Slide 24,28 Week 3) | ['Variable length compression is used to reduce the size of the index file', 'The space requirement for the postings file is O(n)', 'Storing differences among word addresses reduces the size of the postings file', 'The index file has space requirement of O(n^beta), where beta is about 1⁄2'] | A | null | Document 1:::
Invert error
In philately, an invert error occurs when part of a stamp is printed upside-down. Inverts are perhaps the most spectacular of postage stamp errors, not only because of their striking visual appearance, but because some are quite rare, and highly valued by stamp collectors.
Document 2:::
Inverted index
In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content). The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines.
Document 3:::
Inverted index
Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204. There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created.
Document 4:::
Inversion (discrete mathematics)
In computer science and discrete mathematics, an inversion in a sequence is a pair of elements that are out of their natural order.
Document 5:::
Klein configuration
45 . 46 . 56 and their reverses. |
epfl-collab | In User-Based Collaborative Filtering, which of the following is TRUE? | ['Pearson Correlation Coefficient and Cosine Similarity have different value ranges, but return the same similarity ranking for the users', 'Pearson Correlation Coefficient and Cosine Similarity have the same value range but can return different similarity rankings for the users', 'Pearson Correlation Coefficient and Cosine Similarity have the same value range and return the same similarity ranking for the users.', 'Pearson Correlation Coefficient and Cosine Similarity have different value ranges and can return different similarity rankings for the users'] | D | null | Document 1:::
GroupLens Research
GroupLens Research is a human–computer interaction research lab in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities specializing in recommender systems and online communities. GroupLens also works with mobile and ubiquitous technologies, digital libraries, and local geographic information systems. The GroupLens lab was one of the first to study automated recommender systems with the construction of the "GroupLens" recommender, a Usenet article recommendation engine, and MovieLens, a popular movie recommendation site used to study recommendation engines, tagging systems, and user interfaces. The lab has also gained notability for its members' work studying open content communities such as Cyclopath, a geo-wiki that was used in the Twin Cities to help plan the regional cycling system.
Document 2:::
Evaluation measures (information retrieval)
Evaluation measures may be categorised in various ways including offline or online, user-based or system-based and include methods such as observed user behaviour, test collections, precision and recall, and scores from prepared benchmark test sets. Evaluation for an information retrieval system should also include a validation of the measures used, i.e. an assessment of how well they measure what they are intended to measure and how well the system fits its intended use case. Measures are generally used in two settings: online experimentation, which assesses users' interactions with the search system, and offline evaluation, which measures the effectiveness of an information retrieval system on a static offline collection.
Document 3:::
Advertising on social networks
Important factors also include what the user likes, comments on, views, and follows on social media platforms. With social media targeting, advertisements are distributed to users based on information gathered from target group profiles. Social network advertising is not necessarily the same as social media targeting.
Document 4:::
HITS algorithm
Hyperlink-Induced Topic Search (HITS; also known as hubs and authorities) is a link analysis algorithm that rates Web pages, developed by Jon Kleinberg. The idea behind Hubs and Authorities stemmed from a particular insight into the creation of web pages when the Internet was originally forming; that is, certain web pages, known as hubs, served as large directories that were not actually authoritative in the information that they held, but were used as compilations of a broad catalog of information that led users direct to other authoritative pages. In other words, a good hub represents a page that pointed to many other pages, while a good authority represents a page that is linked by many different hubs.The scheme therefore assigns two scores for each page: its authority, which estimates the value of the content of the page, and its hub value, which estimates the value of its links to other pages.
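One common way to compute the two scores is by alternating updates with normalisation; a rough sketch on a hypothetical four-page link graph:

```python
# Hub and authority scores by repeated updates (a sketch, hypothetical link graph).
links = {"a": {"b", "c"}, "b": {"c"}, "c": {"a"}, "d": {"c"}}   # page -> pages it links to
pages = sorted(links)

hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(50):
    # Authority score: sum of hub scores of the pages linking to it.
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    # Hub score: sum of authority scores of the pages it links to.
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    # Normalise so the scores do not grow without bound.
    na = sum(v * v for v in auth.values()) ** 0.5
    nh = sum(v * v for v in hub.values()) ** 0.5
    auth = {p: v / na for p, v in auth.items()}
    hub = {p: v / nh for p, v in hub.items()}

print({p: round(auth[p], 3) for p in pages})   # "c" ends up with the highest authority
```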
Document 5:::
Algorithmic bias
For example, a credit score algorithm may deny a loan without being unfair, if it is consistently weighing relevant financial criteria. If the algorithm recommends loans to one group of users, but denies loans to another set of nearly identical users based on unrelated criteria, and if this behavior can be repeated across multiple occurrences, an algorithm can be described as biased. This bias may be intentional or unintentional (for example, it can come from biased data obtained from a worker that previously did the job the algorithm is going to do from now on).
epfl-collab | Which of the following is TRUE for Recommender Systems (RS)? | ['Matrix Factorization can predict a score for any user-item combination in the dataset.', 'Matrix Factorization is typically robust to the cold-start problem.', 'The complexity of the Content-based RS depends on the number of users', 'Item-based RS need not only the ratings but also the item features'] | A | null | Document 1:::
Evaluation measures (information retrieval)
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
Document 2:::
Average precision
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The success of an IR system may be judged by a range of criteria including relevance, speed, user satisfaction, usability, efficiency and reliability. However, the most important factor in determining a system's effectiveness for users is the overall relevance of results retrieved in response to a query.
Document 3:::
Comparison of research networking tools and research profiling systems
They also differ from social networking systems in that they represent a compendium of data ingested from authoritative and verifiable sources rather than predominantly individually-posted information, making RN tools more reliable. Yet, RN tools have sufficient flexibility to allow for profile editing. RN tools provide resources to bolster human connections: they can make non-intuitive matches, do not depend on serendipity and do not have a propensity to return only to previously identified collaborations/collaborators.
Document 4:::
Comparison of research networking tools and research profiling systems
They also differ from social networking systems in that they represent a compendium of data ingested from authoritative and verifiable sources rather than predominantly individually-posted information, making RN tools more reliable. Yet, RN tools have sufficient flexibility to allow for profile editing. RN tools provide resources to bolster human connections: they can make non-intuitive matches, do not depend on serendipity and do not have a propensity to return only to previously identified collaborations/collaborators.
Document 5:::
Classifier system
Learning classifier systems, or LCS, are a paradigm of rule-based machine learning methods that combine a discovery component (e.g. typically a genetic algorithm) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning). Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions (e.g. behavior modeling, classification, data mining, regression, function approximation, or game strategy). This approach allows complex solution spaces to be broken up into smaller, simpler parts. The founding concepts behind learning classifier systems came from attempts to model complex adaptive systems, using rule-based agents to form an artificial cognitive system (i.e. artificial intelligence). |
epfl-collab | Which of the following properties is part of the RDF Schema Language? | ['Predicate', 'Domain', 'Description', 'Type'] | B | null | Document 1:::
SPARQL
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March 2013. SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery.
Document 2:::
XML schema
An XML schema is a description of a type of XML document, typically expressed in terms of constraints on the structure and content of documents of that type, above and beyond the basic syntactical constraints imposed by XML itself. These constraints are generally expressed using some combination of grammatical rules governing the order of elements, Boolean predicates that the content must satisfy, data types governing the content of elements and attributes, and more specialized rules such as uniqueness and referential integrity constraints. There are languages developed specifically to express XML schemas. The document type definition (DTD) language, which is native to the XML specification, is a schema language that is of relatively limited capability, but that also has other uses in XML aside from the expression of schemas.
Document 3:::
Logical schema
A logical data model or logical schema is a data model of a specific problem domain expressed independently of a particular database management product or storage technology (physical data model) but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags. This is as opposed to a conceptual data model, which describes the semantics of an organization without reference to technology.
Document 4:::
NGSI-LD
The NGSI-LD information model represents Context Information as entities that have properties and relationships to other entities. It is derived from property graphs, with semantics formally defined on the basis of RDF and the semantic web framework. It can be serialized using JSON-LD. Every entity and relationship is given a unique IRI reference as identifier, making the corresponding data exportable as linked data datasets. The -LD suffix denotes this affiliation to the linked data universe.
Document 5:::
XML schema
DTDs are perhaps the most widely supported schema language for XML. Because DTDs are one of the earliest schema languages for XML, defined before XML even had namespace support, they are widely supported. Internal DTDs are often supported in XML processors; external DTDs are less often supported, but only slightly. Most large XML parsers, ones that support multiple XML technologies, will provide support for DTDs as well. |
epfl-collab | Which of the following is correct regarding crowdsourcing? | ['The accuracy of majority voting is never equal to the one of Expectation Maximization.', 'Uniform spammers randomly select answers.', 'Honey pots can detect uniform spammers, random spammers and sloppy workers.', 'Majority Decision and Expectation Maximization both give less weight to spammers’ answers.'] | C | null | Document 1:::
Crowd sourcing
Daren C. Brabham defined crowdsourcing as an "online, distributed problem-solving and production model." Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms are affected not only by their quality, but also by the communication among users about the ideas, and presentation in the platform itself.Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public, and an open call for contributions to help solve the problem.
Document 2:::
Crowd sourcing
The term crowdsourcing was coined in 2006 by two editors at Wired, Jeff Howe and Mark Robinson, to describe how businesses were using the Internet to "outsource work to the crowd", which quickly led to the portmanteau "crowdsourcing". Howe published a definition for the term in a blog post in June 2006: Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.
Document 3:::
Crowd sourcing
Members of the public submit solutions that are then owned by the entity who originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or public recognition. In other cases, the only rewards may be praise or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, from experts, or from small businesses.
Document 4:::
Crowdsourcing as human-machine translation
The use of crowdsourcing and text corpus in human-machine translation (HMT) within the last few years have become predominant in their area, in comparison to solely using machine translation (MT). There have been a few recent academic journals looking into the benefits that using crowdsourcing as a translation technique could bring to the current approach to the task and how it could help improve and make more efficient the current tools available to the public.
Document 5:::
Crowd Supply
Crowd Supply is a crowdfunding platform based in Portland, Oregon. The platform has claimed "over twice the success rate of Kickstarter and Indiegogo", and partners with creators who use it, providing mentorship resembling a business incubator. Some see Crowd Supply's close management of projects as the solution to the fulfillment failure rate of other crowdfunding platforms. The site also serves as an online store for the inventories of successful campaigns. Notable projects from the platform include Andrew Huang's Novena, an open-source laptop.
epfl-collab | Given the 2-itemsets {1, 2}, {1, 3}, {1, 5}, {2, 3}, {2, 5}, when generating the 3-itemset we will: | ['Have 3 3-itemsets after the join and 3 3-itemsets after the prune', 'Have 4 3-itemsets after the join and 2 3-itemsets after the prune', 'Have 2 3-itemsets after the join and 2 3-itemsets after the prune', 'Have 4 3-itemsets after the join and 4 3-itemsets after the prune'] | B | null | Document 1:::
GSP algorithm
From the frequent items, a set of candidate 2-sequences are formed, and another pass is made to identify their frequency. The frequent 2-sequences are used to generate the candidate 3-sequences, and this process is repeated until no more frequent sequences are found. There are two main steps in the algorithm.
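The same join-then-prune pattern applies to itemsets; a sketch using the 2-itemsets from the question above shows four candidates after the join and two surviving the prune:

```python
from itertools import combinations

# Candidate generation in the Apriori/GSP style: join frequent (k-1)-itemsets that
# share their first k-2 items, then prune candidates with an infrequent (k-1)-subset.
L2 = [(1, 2), (1, 3), (1, 5), (2, 3), (2, 5)]

def join(prev):
    out = set()
    for a, b in combinations(prev, 2):
        if a[:-1] == b[:-1]:                      # same prefix of length k-2
            out.add(tuple(sorted(set(a) | set(b))))
    return sorted(out)

def prune(cands, prev):
    prev = set(prev)
    return [c for c in cands
            if all(sub in prev for sub in combinations(c, len(c) - 1))]

C3 = join(L2)
print(C3)             # [(1, 2, 3), (1, 2, 5), (1, 3, 5), (2, 3, 5)]  -> 4 candidates
print(prune(C3, L2))  # [(1, 2, 3), (1, 2, 5)]                        -> 2 survive
```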
Document 2:::
1/3–2/3 conjecture
If a finite partially ordered set is totally ordered, then it has only one linear extension, but otherwise it will have more than one. The 1/3–2/3 conjecture states that one can choose two elements x and y such that, among this set of possible linear extensions, between 1/3 and 2/3 of them place x earlier than y, and symmetrically between 1/3 and 2/3 of them place y earlier than x. There is an alternative and equivalent statement of the 1/3–2/3 conjecture in the language of probability theory. One may define a uniform probability distribution on the linear extensions in which each possible linear extension is equally likely to be chosen. The 1/3–2/3 conjecture states that, under this probability distribution, there exists a pair of elements x and y such that the probability that x is earlier than y in a random linear extension is between 1/3 and 2/3.In 1984, Jeff Kahn and Michael Saks defined δ(P), for any partially ordered set P, to be the largest real number δ such that P has a pair x, y with x earlier than y in a number of linear extensions that is between δ and 1 − δ of the total number of linear extensions. In this notation, the 1/3–2/3 conjecture states that every finite partial order that is not total has δ(P) ≥ 1/3.
Document 3:::
3-dimensional matching
A 2-dimensional matching can be defined in a completely analogous manner. Let X and Y be finite sets, and let T be a subset of X × Y. Now M ⊆ T is a 2-dimensional matching if the following holds: for any two distinct pairs (x1, y1) ∈ M and (x2, y2) ∈ M, we have x1 ≠ x2 and y1 ≠ y2. In the case of 2-dimensional matching, the set T can be interpreted as the set of edges in a bipartite graph G = (X, Y, T); each edge in T connects a vertex in X to a vertex in Y. A 2-dimensional matching is then a matching in the graph G, that is, a set of pairwise non-adjacent edges. Hence 3-dimensional matchings can be interpreted as a generalization of matchings to hypergraphs: the sets X, Y, and Z contain the vertices, each element of T is a hyperedge, and the set M consists of pairwise non-adjacent edges (edges that do not have a common vertex). In case of 2-dimensional matching, we have Y = Z.
Document 4:::
2–3 heap
In computer science, a 2–3 heap is a data structure, a variation on the heap, designed by Tadao Takaoka in 1999. The structure is similar to the Fibonacci heap, and borrows from the 2–3 tree. Time costs for some common heap operations are: delete-min takes O(log n) amortized time; decrease-key takes constant amortized time; insertion takes constant amortized time.
Document 5:::
Tompkins–Paige algorithm
The Tompkins–Paige algorithm is a computer algorithm for generating all permutations of a finite set of objects. |
epfl-collab | When using bootstrapping in Random Forests, the number of different data items used to construct a single tree is: | ['Of order square root of the size of the training set with high probability', 'Smaller than the size of the training data set with high probability', 'The same as the size of the training data set', 'Depends on the outcome of the sampling process, and can be both smaller or larger than the training set'] | B | null | Document 1:::
Recursive partitioning
Well known methods of recursive partitioning include Ross Quinlan's ID3 algorithm and its successors, C4.5 and C5.0 and Classification and Regression Trees (CART). Ensemble learning methods such as Random Forests help to overcome a common criticism of these methods – their vulnerability to overfitting of the data – by employing different algorithms and combining their output in some way. This article focuses on recursive partitioning for medical diagnostic tests, but the technique has far wider applications.
Document 2:::
Random tree
In mathematics and computer science, a random tree is a tree or arborescence that is formed by a stochastic process. Types of random trees include: Uniform spanning tree, a spanning tree of a given graph in which each different tree is equally likely to be selected Random minimal spanning tree, spanning trees of a graph formed by choosing random edge weights and using the minimum spanning tree for those weights Random binary tree, binary trees with a given number of nodes, formed by inserting the nodes in a random order or by selecting all possible trees uniformly at random Random recursive tree, increasingly labelled trees, which can be generated using a simple stochastic growth rule. Treap or randomized binary search tree, a data structure that uses random choices to simulate a random binary tree for non-random update sequences Rapidly exploring random tree, a fractal space-filling pattern used as a data structure for searching high-dimensional spaces Brownian tree, a fractal tree structure created by diffusion-limited aggregation processes Random forest, a machine-learning classifier based on choosing random subsets of variables for each tree and using the most frequent tree output as the overall classification Branching process, a model of a population in which each individual has a random number of children
Document 3:::
Classification and regression tree
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making).
Document 4:::
Item tree analysis
Item tree analysis (ITA) is a data analytical method which allows constructing a hierarchical structure on the items of a questionnaire or test from observed response patterns. Assume that we have a questionnaire with m items and that subjects can answer positive (1) or negative (0) to each of these items, i.e. the items are dichotomous. If n subjects answer the items this results in a binary data matrix D with m columns and n rows. Typical examples of this data format are test items which can be solved (1) or failed (0) by subjects.
Document 5:::
Decision tree pruning
A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances, then use pruning to remove nodes that do not provide additional information. Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
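For the bootstrapping question above: a bootstrap sample for one tree draws n points with replacement, so the number of distinct training items it contains is, with high probability, noticeably smaller than n (about 63.2% of n for large n). A quick illustrative simulation, assuming the standard bootstrap (names are my own):

```python
import random

def distinct_fraction(n, trials=200, seed=0):
    """Average fraction of distinct items in a size-n sample drawn
    with replacement from an n-item training set."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.randrange(n) for _ in range(n)]
        total += len(set(sample)) / n
    return total / trials

# Approaches 1 - 1/e ~= 0.632 as n grows, i.e. each tree sees fewer
# distinct items than the full training set with high probability.
print(distinct_fraction(1000))
```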
epfl-collab | To constrain an object of an RDF statement from being of an atomic type (e.g., String), one has to use the following RDF/RDFS property: | ['rdf:type', 'rdfs:subClassOf', 'rdfs:range', 'rdfs:domain'] | C | null | Document 1:::
SPARQL
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March 2013. SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery.
Document 2:::
Atomicity (database systems)
In database systems, atomicity (from Ancient Greek: ἄτομος, romanized: átomos, lit. 'undividable') is one of the ACID (Atomicity, Consistency, Isolation, Durability) transaction properties. An atomic transaction is an indivisible and irreducible series of database operations such that either all occurs, or nothing occurs. A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series outright.
Document 3:::
Strongly typed identifier
It is common for implementations to handle equality testing, serialization and model binding. The strongly typed identifier commonly wraps the data type used as the primary key in the database, such as a string, an integer or universally unique identifier (UUID). Web frameworks can often be configured to model bind properties on view models that are strongly typed identifiers. Object–relational mappers can often be configured with value converters to map data between the properties on a model using strongly typed identifier data types and database columns.
Document 4:::
Abstract base class
Since classes are themselves first-class objects, it is possible to have them dynamically alter their structure by sending them the appropriate messages. Other languages that focus more on strong typing such as Java and C++ do not allow the class hierarchy to be modified at run time. Semantic web objects have the capability for run time changes to classes. The rationale is similar to the justification for allowing multiple superclasses, that the Internet is so dynamic and flexible that dynamic changes to the hierarchy are required to manage this volatility.
Document 5:::
Abstract base class
The volatility of the Internet requires this level of flexibility and the technology standards such as the Web Ontology Language (OWL) are designed to support it. A similar issue is whether or not the class hierarchy can be modified at run time. Languages such as Flavors, CLOS, and Smalltalk all support this feature as part of their meta-object protocols. |
epfl-collab | What is a correct pruning strategy for decision tree induction? | ['Apply Maximum Description Length principle', 'Choose the model that maximizes L(M) + L(M|D)', 'Stop partitioning a node when either positive or negative samples dominate the samples of the other class', 'Remove attributes with lowest information gain'] | C | null | Document 1:::
Decision tree pruning
Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical and redundant to classify instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting. One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples.
Document 2:::
Decision tree pruning
Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical and redundant to classify instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting. One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples.
Document 3:::
Decision tree pruning
A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances, then use pruning to remove nodes that do not provide additional information. Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
Document 4:::
Decision tree pruning
A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances, then use pruning to remove nodes that do not provide additional information. Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
Document 5:::
Classification and regression tree
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making).
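Relating the pruning passages above to the question: the listed correct strategy is a pre-pruning (early-stopping) rule that stops partitioning a node once one class clearly dominates. A toy sketch of such a check (the threshold and names are illustrative, not taken from any library):

```python
def should_stop(labels, purity_threshold=0.95):
    """Stop partitioning a node once one class dominates its samples."""
    if not labels:
        return True
    majority = max(labels.count(c) for c in set(labels))
    return majority / len(labels) >= purity_threshold

print(should_stop(["pos"] * 19 + ["neg"]))        # True: make the node a leaf
print(should_stop(["pos"] * 10 + ["neg"] * 10))   # False: keep splitting
```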
epfl-collab | In the first pass over the database of the FP Growth algorithm | ['A tree structure is constructed', 'Prefixes among itemsets are determined', 'The frequency of items is computed', 'Frequent itemsets are extracted'] | C | null | Document 1:::
GSP algorithm
Candidate Generation. Given the set of frequent (k-1)-sequences F(k-1), the candidates for the next pass are generated by joining F(k-1) with itself.
Document 2:::
GSP algorithm
This process requires one pass over the whole database. GSP algorithm makes multiple database passes. In the first pass, all single items (1-sequences) are counted.
Document 3:::
Growing self-organizing map
A growing self-organizing map (GSOM) is a growing variant of a self-organizing map (SOM). The GSOM was developed to address the issue of identifying a suitable map size in the SOM. It starts with a minimal number of nodes (usually 4) and grows new nodes on the boundary based on a heuristic. By using the value called Spread Factor (SF), the data analyst has the ability to control the growth of the GSOM.
Document 4:::
Growth function
The growth function, also called the shatter coefficient or the shattering number, measures the richness of a set family. It is especially used in the context of statistical learning theory, where it measures the complexity of a hypothesis class. The term 'growth function' was coined by Vapnik and Chervonenkis in their 1968 paper, where they also proved many of its properties. It is a basic concept in machine learning.
Document 5:::
Depth-first search
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. Extra memory, usually a stack, is needed to keep track of the nodes discovered so far along a specified branch which helps in backtracking of the graph. A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux as a strategy for solving mazes. |
epfl-collab | Your input is "Distributed Information Systems". Your model tries to predict "Distributed" and "Systems" by leveraging the fact that these words are in the neighborhood of "Information". This model can be: | ['Word Embeddings', 'LDA', 'kNN', 'Bag of Words'] | A | null | Document 1:::
Distributed cognition
These representation-based frameworks consider distributed cognition as "a cognitive system whose structures and processes are distributed between internal and external representations, across a group of individuals, and across space and time" (Zhang and Patel, 2006). In general terms, they consider a distributed cognition system to have two components: internal and external representations. In their description, internal representations are knowledge and structure in individuals' minds while external representations are knowledge and structure in the external environment (Zhang, 1997b; Zhang and Norman, 1994).
Document 2:::
Distributed cognition
Hutchins' distributed cognition theory explains mental processes by taking as the fundamental unit of analysis "a collection of individuals and artifacts and their relations to each other in a particular work practice". "DCog" is a specific approach to distributed cognition (distinct from other meanings) which takes a computational perspective towards goal-based activity systems.The distributed cognition approach uses insights from cultural anthropology, sociology, embodied cognitive science, and the psychology of Lev Vygotsky (cf. cultural-historical psychology).
Document 3:::
Distributed cognition
DCog studies the ways that memories, facts, or knowledge is embedded in the objects, individuals, and tools in our environment. DCog is a useful approach for designing the technologically mediated social aspects of cognition by putting emphasis on the individual and his/her environment, and the media channels with which people interact, either in order to communicate with each other, or socially coordinate to perform complex tasks. Distributed cognition views a system of cognition as a set of representations propagated through specific media, and models the interchange of information between these representational media.
Document 4:::
Competition model
The Competition Model is a psycholinguistic theory of language acquisition and sentence processing, developed by Elizabeth Bates and Brian MacWhinney (1982). The claim in MacWhinney, Bates, and Kliegl (1984) is that "the forms of natural languages are created, governed, constrained, acquired, and used in the service of communicative functions." Furthermore, the model holds that processing is based on an online competition between these communicative functions or motives.
Document 5:::
Sparse Distributed Memory
Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines – e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, etc. Sparse distributed memory is used for storing and retrieving large amounts ($2^{1000}$ bits) of information without focusing on the accuracy but on similarity of information. There are some recent applications in robot navigation and experience-based robot manipulation.
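For the question above: a model that predicts "Distributed" and "Systems" from their neighbour "Information" is a word-embedding model in the skip-gram style, trained on (centre word, context word) pairs. A tiny illustrative sketch of how such pairs are extracted (window size and names are my own):

```python
def context_pairs(tokens, window=1):
    """Skip-gram-style training pairs: each centre word predicts its neighbours."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

# ('Information', 'Distributed') and ('Information', 'Systems') are among the
# pairs, i.e. "Information" is trained to predict its neighbourhood.
print(context_pairs(["Distributed", "Information", "Systems"]))
```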
epfl-collab | Considering the transactions below, which one is WRONG?
|Transaction ID |Items Bought|
|--|--|
|1|Tea|
|2|Tea, Yoghurt|
|3|Tea, Yoghurt, Kebap|
|4 |Kebap |
|5|Tea, Kebap| | ['{Yoghurt} -> {Kebab} has 50% confidence', '{Tea} has the highest support', '{Yoghurt, Kebap} has 20% support', '{Yoghurt} has the lowest support among all itemsets'] | D | null | Document 1:::
Multi-party fair exchange protocol
Matthew K. Franklin and Gene Tsudik suggested in 1998 the following classification: an $n$-party single-unit general exchange is a permutation $\sigma$ on $\{1, \dots, n\}$, where each party $P_i$ offers a single unit of commodity $K_i$ to $P_{\sigma(i)}$, and receives a single unit of commodity $K_{\sigma^{-1}(i)}$ from $P_{\sigma^{-1}(i)}$. An $n$-party multi-unit general exchange is a matrix of baskets, where the entry $B_{ij}$ in row $i$ and column $j$ is the basket of goods given by $P_i$ to $P_j$.
Document 2:::
Atomicity (database systems)
As a consequence, the transaction cannot be observed to be in progress by another database client. At one moment in time, it has not yet happened, and at the next it has already occurred in whole (or nothing happened if the transaction was cancelled in progress). An example of an atomic transaction is a monetary transfer from bank account A to account B. It consists of two operations, withdrawing the money from account A and saving it to account B. Performing these operations in an atomic transaction ensures that the database remains in a consistent state, that is, money is neither lost nor created if either of those two operations fails. The same term is also used in the definition of First normal form in database systems, where it instead refers to the concept that the values for fields may not consist of multiple smaller values to be decomposed, such as a string into which multiple names, numbers, dates, or other types may be packed.
Document 3:::
Transaction data
Transaction data or transaction information is a category of data describing transactions. Transaction data/information gathers variables generally referring to reference data or master data – e.g. dates, times, time zones, currencies. Typical transactions are: Financial transactions about orders, invoices, payments; Work transactions about plans, activity records; Logistic transactions about deliveries, storage records, travel records, etc.
Document 4:::
Double counting (accounting)
Double counting in accounting is an error whereby a transaction is counted more than once, for whatever reason. But in social accounting it also refers to a conceptual problem in social accounting practice, when the attempt is made to estimate the new value added by Gross Output, or the value of total investments.
Document 5:::
Transaction processing
However, if a single operation in the series fails during the exchange, the entire exchange fails. You do not get the book and the bookstore does not get your money. The technology responsible for making the exchange balanced and predictable is called transaction processing. |
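The support and confidence values needed to check the statements in the question can be computed directly from the five transactions in the table; a small illustrative sketch (names are my own):

```python
transactions = [
    {"Tea"}, {"Tea", "Yoghurt"}, {"Tea", "Yoghurt", "Kebap"},
    {"Kebap"}, {"Tea", "Kebap"},
]

def support(itemset):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """conf(A -> B) = support(A union B) / support(A)."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"Tea"}))                    # 0.8 -> highest support
print(support({"Yoghurt"}))                # 0.4
print(support({"Yoghurt", "Kebap"}))       # 0.2 -> lower than {Yoghurt} alone
print(confidence({"Yoghurt"}, {"Kebap"}))  # 0.5 -> 50% confidence
```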
epfl-collab | Which is an appropriate method for fighting skewed distributions of class labels in classification? | ['Generate artificial data points for the most frequent classes', 'Construct the validation set such that the class label distribution approximately matches the global distribution of the class labels', 'Include an over-proportional number of samples from the larger class', 'Use leave-one-out cross validation'] | B | null | Document 1:::
Multi-label classification
In machine learning, multi-label classification or multi-output classification is a variant of the classification problem where multiple nonexclusive labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of several (more than two) classes. In the multi-label problem the labels are nonexclusive and there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y; that is, it assigns a value of 0 or 1 for each element (label) in y.
Document 2:::
Multiclass classifier
In machine learning and statistical classification, multiclass classification or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification). While many classification algorithms (notably multinomial logistic regression) naturally permit the use of more than two classes, some are by nature binary algorithms; these can, however, be turned into multinomial classifiers by a variety of strategies. Multiclass classification should not be confused with multi-label classification, where multiple labels are to be predicted for each instance.
Document 3:::
Loss functions for classification
In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to). Given $\mathcal{X}$ as the space of all possible inputs (usually $\mathcal{X} \subset \mathbb{R}^d$), and $\mathcal{Y} = \{-1, 1\}$ as the set of labels (possible outputs), a typical goal of classification algorithms is to find a function $f: \mathcal{X} \to \mathcal{Y}$ which best predicts a label $y$ for a given input $\vec{x}$. However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same $\vec{x}$ to generate different $y$. As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as $I = \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy$, where $V(f(\vec{x}), y)$ is a given loss function, and $p(\vec{x}, y)$ is the probability density function of the process that generated the data, which can equivalently be written as $p(\vec{x}, y) = p(y \mid \vec{x})\, p(\vec{x})$.
Document 4:::
Classification algorithm
This category is about statistical classification algorithms. For more information, see Statistical classification.
Document 5:::
One-class classification
In machine learning, one-class classification (OCC), also known as unary classification or class-modelling, tries to identify objects of a specific class amongst all objects, by primarily learning from a training set containing only the objects of that class, although there exist variants of one-class classifiers where counter-examples are used to further refine the classification boundary. This is different from and more difficult than the traditional classification problem, which tries to distinguish between two or more classes with the training set containing objects from all the classes. Examples include the monitoring of helicopter gearboxes, motor failure prediction, or the operational status of a nuclear plant as 'normal': In this scenario, there are few, if any, examples of catastrophic system states; only the statistics of normal operation are known. While many of the above approaches focus on the case of removing a small number of outliers or anomalies, one can also learn the other extreme, where the single class covers a small coherent subset of the data, using an information bottleneck approach. |
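Relating the passages above to the question: with skewed class labels, a common remedy is to build the validation set so that its label distribution approximately matches the global one (stratified sampling). A minimal illustrative sketch (the function and its signature are my own, not scikit-learn's):

```python
import random

def stratified_split(samples, labels, val_fraction=0.2, seed=0):
    """Validation set whose class distribution roughly matches the
    global label distribution (per-class sampling)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    train, val = [], []
    for y, xs in by_class.items():
        rng.shuffle(xs)
        cut = max(1, int(round(len(xs) * val_fraction)))
        val += [(x, y) for x in xs[:cut]]
        train += [(x, y) for x in xs[cut:]]
    return train, val

labels = ["neg"] * 90 + ["pos"] * 10          # skewed: 90% vs 10%
samples = list(range(100))
train, val = stratified_split(samples, labels)
print(sum(1 for _, y in val if y == "pos") / len(val))   # ~0.1, as globally
```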
epfl-collab | Consider the following set of frequent 3-itemsets: {1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {2, 3, 4}, {2, 3, 5}, {3, 4, 5}. Which one is not a candidate 4-itemset? | ['{2,3,4,5}', '{1,2,3,4}', '{1,3,4,5} ', '{1,2,4,5}'] | C | null | Document 1:::
GSP algorithm
From the frequent items, a set of candidate 2-sequences are formed, and another pass is made to identify their frequency. The frequent 2-sequences are used to generate the candidate 3-sequences, and this process is repeated until no more frequent sequences are found. There are two main steps in the algorithm.
Document 2:::
Affinity analysis
The first step in the process is to count the co-occurrence of attributes in the data set. Next, a subset is created called the frequent itemset. Association rule mining then takes the form: if a condition or feature (A) is present, then another condition or feature (B) exists.
Document 3:::
Affinity analysis
The first step in the process is to count the co-occurrence of attributes in the data set. Next, a subset is created called the frequent itemset. Association rule mining then takes the form: if a condition or feature (A) is present, then another condition or feature (B) exists.
Document 4:::
GSP algorithm
Candidate Generation. Given the set of frequent (k-1)-sequences F(k-1), the candidates for the next pass are generated by joining F(k-1) with itself.
Document 5:::
Numerical 3-dimensional matching
Numerical 3-dimensional matching is an NP-complete decision problem. It is given by three multisets of integers $X$, $Y$ and $Z$, each containing $k$ elements, and a bound $b$. The goal is to select a subset $M$ of $X \times Y \times Z$ such that every integer in $X$, $Y$ and $Z$ occurs exactly once and that for every triple $(x, y, z)$ in the subset $x + y + z = b$ holds.
epfl-collab | Which of the following is true in the context of inverted files? | ['The finer the addressing granularity used in documents, the smaller the posting file becomes', 'Inverted files are optimized for supporting search on dynamic text collections', 'Index merging compresses an inverted file index on disk and reduces the storage cost', 'The trie structure used for index construction is also used as a data access structure to terms in the vocabulary'] | D | null | Document 1:::
Inverted index
In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content). The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines.
Document 2:::
Inverted index
Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204. There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created.
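A small illustrative sketch of both variants described above — a record-level index (term to document ids) and a word-level index (term to (document id, position) postings) — built over three toy documents:

```python
from collections import defaultdict

docs = {
    1: "new home sales top forecasts",
    2: "home sales rise in july",
    3: "increase in home sales in july",
}

# Record-level index: term -> set of document ids.
# Word-level index:   term -> list of (document id, position) postings.
record_level = defaultdict(set)
word_level = defaultdict(list)
for doc_id, text in docs.items():
    for pos, term in enumerate(text.split()):
        record_level[term].add(doc_id)
        word_level[term].append((doc_id, pos))

print(sorted(record_level["sales"]))   # [1, 2, 3]
print(word_level["sales"])             # positions enable phrase queries
```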
Document 3:::
Invert error
In philately, an invert error occurs when part of a stamp is printed upside-down. Inverts are perhaps the most spectacular of postage stamp errors, not only because of their striking visual appearance, but because some are quite rare, and highly valued by stamp collectors.
Document 4:::
Inversion (discrete mathematics)
In computer science and discrete mathematics, an inversion in a sequence is a pair of elements that are out of their natural order.
Document 5:::
Loop inversion
In computer science, loop inversion is a compiler optimization and loop transformation in which a while loop is replaced by an if block containing a do..while loop. When used correctly, it may improve performance due to instruction pipelining. |
epfl-collab | Regarding Label Propagation, which of the following is false? | ['Injection probability should be higher when labels are obtained from experts than by crowdworkers', 'Propagation of labels through high degree nodes are penalized by low abandoning probability', 'It can be interpreted as a random walk model', 'The labels are inferred using the labels that are known apriori'] | B | null | Document 1:::
Rooted tree
A labeled tree is a tree in which each vertex is given a unique label. The vertices of a labeled tree on n vertices (for nonnegative integers n) are typically given the labels 1, 2, …, n. A recursive tree is a labeled rooted tree where the vertex labels respect the tree order (i.e., if u < v for two vertices u and v, then the label of u is smaller than the label of v). In a rooted tree, the parent of a vertex v is the vertex connected to v on the path to the root; every vertex has a unique parent, except the root has no parent.
Document 2:::
Propagating chain
Chain propagation (sometimes referred to as propagation) is a process in which a reactive intermediate is continuously regenerated during the course of a chemical chain reaction. For example, in the chlorination of methane, there is a two-step propagation cycle involving as chain carriers a chlorine atom and a methyl radical which are regenerated alternately: •Cl + CH4 → HCl + •CH3 •CH3 + Cl2 → CH3Cl + •ClThe two steps add to give the equation for the overall chain reaction: CH4 + Cl2 → CH3Cl + HCl.
Document 3:::
Interval propagation
In numerical mathematics, interval propagation or interval constraint propagation is the problem of contracting interval domains associated to variables of R without removing any value that is consistent with a set of constraints (i.e., equations or inequalities). It can be used to propagate uncertainties in the situation where errors are represented by intervals. Interval propagation considers an estimation problem as a constraint satisfaction problem.
Document 4:::
Multi-label classification
In machine learning, multi-label classification or multi-output classification is a variant of the classification problem where multiple nonexclusive labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of several (more than two) classes. In the multi-label problem the labels are nonexclusive and there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y; that is, it assigns a value of 0 or 1 for each element (label) in y.
Document 5:::
Sequence labeling
Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field. |
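To make the question above concrete, here is a minimal sketch of basic iterative label propagation on a graph (in the spirit of Zhu–Ghahramani): a-priori labels are clamped on seed nodes and spread to their neighbours, which is why the process can also be read as a random walk. The injection and abandoning probabilities mentioned in the choices belong to richer variants (e.g. Modified Adsorption) and are not modelled here; all names and the toy graph are illustrative.

```python
def propagate_labels(adj, seeds, iterations=20):
    """Each node repeatedly absorbs the label distributions of its
    neighbours; seed nodes are clamped back to their known labels."""
    labels = {v: dict(seeds.get(v, {})) for v in adj}
    for _ in range(iterations):
        new = {}
        for v, neighbours in adj.items():
            dist = {}
            for u in neighbours:
                for lab, w in labels[u].items():
                    dist[lab] = dist.get(lab, 0.0) + w / len(neighbours)
            new[v] = dist
        for v, seed in seeds.items():   # clamp the a-priori labels
            new[v] = dict(seed)
        labels = new
    return labels

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
seeds = {"a": {"spam": 1.0}, "d": {"ham": 1.0}}
print(propagate_labels(adj, seeds))
```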
epfl-collab | The type statement in RDF would be expressed in the relational data model by a table | ['with one attribute', 'with two attributes', 'with three attributes', 'cannot be expressed in the relational data model'] | A | null | Document 1:::
Relational Model
Most relational databases use the SQL data definition and query language; these systems implement what can be regarded as an engineering approximation to the relational model. A table in a SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, SQL databases deviate from the relational model in many details, and Codd fiercely argued against deviations that compromise the original principles.
Document 2:::
Relational Model
The relational model (RM) is an approach to managing data using a structure and language consistent with first-order predicate logic, first described in 1969 by English computer scientist Edgar F. Codd, where all data is represented in terms of tuples, grouped into relations. A database organized in terms of the relational model is a relational database. The purpose of the relational model is to provide a declarative method for specifying data and queries: users directly state what information the database contains and what information they want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for answering queries.
Document 3:::
SPARQL
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March 2013. SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery.
Document 4:::
Logical schema
A logical data model or logical schema is a data model of a specific problem domain expressed independently of a particular database management product or storage technology (physical data model) but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags. This is as opposed to a conceptual data model, which describes the semantics of an organization without reference to technology.
Document 5:::
Oracle NoSQL Database
NoSQL Database supports tabular model. Each row is identified by a unique key, and has a value, of arbitrary length, which is interpreted by the application. The application can manipulate (insert, delete, update, read) a single row in a transaction. The application can also perform an iterative, non-transactional scan of all the rows in the database. |
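To illustrate the question above: all rdf:type statements that assign resources to one class collapse, in the relational model, into a table with a single attribute holding the subjects. A toy sketch with made-up ex: prefixes and plain Python tuples standing in for triples:

```python
triples = [
    ("ex:alice", "rdf:type", "ex:Person"),
    ("ex:bob", "rdf:type", "ex:Person"),
    ("ex:alice", "ex:knows", "ex:bob"),
]

# The rdf:type statements for the class ex:Person become the one-column
# relation Person(subject); ordinary properties would need two columns.
person_table = [s for (s, p, o) in triples
                if p == "rdf:type" and o == "ex:Person"]
print(person_table)   # ['ex:alice', 'ex:bob']
```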
epfl-collab | Given graph 1→2, 1→3, 2→3, 3→2, switching from Page Rank to Teleporting PageRank will have an influence on the value(s) of: | ['Node 2 and 3', 'Node 1', 'All the nodes', 'No nodes. The values will stay unchanged.'] | C | null | Document 1:::
PageRank
PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is.
Document 2:::
Web graph
The webgraph describes the directed links between pages of the World Wide Web. A graph, in general, consists of several vertices, some pairs connected by edges. In a directed graph, edges are directed lines or arcs. The webgraph is a directed graph, whose vertices correspond to the pages of the WWW, and a directed edge connects page X to page Y if there exists a hyperlink on page X, referring to page Y.
Document 3:::
Traversed edges per second
The number of traversed edges per second (TEPS) that can be performed by a supercomputer cluster is a measure of both the communications capabilities and computational power of the machine. This is in contrast to the more standard metric of floating-point operations per second (FLOPS), which does not give any weight to the communication capabilities of the machine. The term first entered usage in 2010 with the advent of petascale computing, and has since been measured for many of the world's largest supercomputers.In this context, an edge is a connection between two vertices on a graph, and the traversal is the ability of the machine to communicate data between these two points.
Document 4:::
Random geometric graph
Additionally, random geometric graphs display degree assortativity according to their spatial dimension: "popular" nodes (those with many links) are particularly likely to be linked to other popular nodes. A real-world application of RGGs is the modeling of ad hoc networks. Furthermore they are used to perform benchmarks for graph algorithms.
Document 5:::
Closeness (graph theory)
Centrality indices are answers to the question "What characterizes an important vertex?" The answer is given in terms of a real-valued function on the vertices of a graph, where the values produced are expected to provide a ranking which identifies the most important nodes.The word "importance" has a wide number of meanings, leading to many different definitions of centrality. Two categorization schemes have been proposed. "Importance" can be conceived in relation to a type of flow or transfer across the network. |
epfl-collab | Which of the following is true regarding the random forest classification algorithm? | ['We compute a prediction by randomly selecting the decision of one weak learner.', 'It uses only a subset of features for learning in each weak learner.', 'It produces a human interpretable model.', 'It is not suitable for parallelization.'] | B | null | Document 1:::
Classification and regression tree
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making).
Document 2:::
Classification algorithm
This category is about statistical classification algorithms. For more information, see Statistical classification.
Document 3:::
Random tree
In mathematics and computer science, a random tree is a tree or arborescence that is formed by a stochastic process. Types of random trees include: Uniform spanning tree, a spanning tree of a given graph in which each different tree is equally likely to be selected Random minimal spanning tree, spanning trees of a graph formed by choosing random edge weights and using the minimum spanning tree for those weights Random binary tree, binary trees with a given number of nodes, formed by inserting the nodes in a random order or by selecting all possible trees uniformly at random Random recursive tree, increasingly labelled trees, which can be generated using a simple stochastic growth rule. Treap or randomized binary search tree, a data structure that uses random choices to simulate a random binary tree for non-random update sequences Rapidly exploring random tree, a fractal space-filling pattern used as a data structure for searching high-dimensional spaces Brownian tree, a fractal tree structure created by diffusion-limited aggregation processes Random forest, a machine-learning classifier based on choosing random subsets of variables for each tree and using the most frequent tree output as the overall classification Branching process, a model of a population in which each individual has a random number of children
Document 4:::
Recursive partitioning
Well known methods of recursive partitioning include Ross Quinlan's ID3 algorithm and its successors, C4.5 and C5.0 and Classification and Regression Trees (CART). Ensemble learning methods such as Random Forests help to overcome a common criticism of these methods – their vulnerability to overfitting of the data – by employing different algorithms and combining their output in some way. This article focuses on recursive partitioning for medical diagnostic tests, but the technique has far wider applications.
Document 5:::
C4.5 algorithm
C4.5 is an algorithm used to generate a decision tree developed by Ross Quinlan. C4.5 is an extension of Quinlan's earlier ID3 algorithm. The decision trees generated by C4.5 can be used for classification, and for this reason, C4.5 is often referred to as a statistical classifier. In 2011, authors of the Weka machine learning software described the C4.5 algorithm as "a landmark decision tree program that is probably the machine learning workhorse most widely used in practice to date". It became quite popular after ranking #1 in the pre-eminent Top 10 Algorithms in Data Mining paper published by Springer LNCS in 2008.
epfl-collab | Which of the following properties is part of the RDF Schema Language? | ['Predicate', 'Description', 'Domain', 'Type'] | C | null | Document 1:::
SPARQL
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 in March 2013. SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery.
Document 2:::
XML schema
An XML schema is a description of a type of XML document, typically expressed in terms of constraints on the structure and content of documents of that type, above and beyond the basic syntactical constraints imposed by XML itself. These constraints are generally expressed using some combination of grammatical rules governing the order of elements, Boolean predicates that the content must satisfy, data types governing the content of elements and attributes, and more specialized rules such as uniqueness and referential integrity constraints. There are languages developed specifically to express XML schemas. The document type definition (DTD) language, which is native to the XML specification, is a schema language that is of relatively limited capability, but that also has other uses in XML aside from the expression of schemas.
Document 3:::
Logical schema
A logical data model or logical schema is a data model of a specific problem domain expressed independently of a particular database management product or storage technology (physical data model) but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags. This is as opposed to a conceptual data model, which describes the semantics of an organization without reference to technology.
Document 4:::
NGSI-LD
The NGSI-LD information model represents Context Information as entities that have properties and relationships to other entities. It is derived from property graphs, with semantics formally defined on the basis of RDF and the semantic web framework. It can be serialized using JSON-LD. Every entity and relationship is given a unique IRI reference as identifier, making the corresponding data exportable as linked data datasets. The -LD suffix denotes this affiliation to the linked data universe.
Document 5:::
XML schema
DTDs are perhaps the most widely supported schema language for XML. Because DTDs are one of the earliest schema languages for XML, defined before XML even had namespace support, they are widely supported. Internal DTDs are often supported in XML processors; external DTDs are less often supported, but only slightly. Most large XML parsers, ones that support multiple XML technologies, will provide support for DTDs as well. |
epfl-collab | How does matrix factorization address the issue of missing ratings? | ['It maps ratings into a lower-dimensional space', 'It uses regularization of the rating matrix', 'It sets missing ratings to zero', 'It performs gradient descent only for existing ratings'] | D | null | Document 1:::
Imputation (statistics)
Once all missing values have been imputed, the data set can then be analysed using standard techniques for complete data. There have been many theories embraced by scientists to account for missing data but the majority of them introduce bias. A few of the well known attempts to deal with missing data include: hot deck and cold deck imputation; listwise and pairwise deletion; mean imputation; non-negative matrix factorization; regression imputation; last observation carried forward; stochastic imputation; and multiple imputation.
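Beyond imputation, the recommender-style answer to the question is to factorize the rating matrix while summing the loss (and hence running gradient descent) only over the ratings that actually exist. A minimal illustrative SGD sketch (hyper-parameters and names are my own):

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.01, reg=0.1,
              epochs=200, seed=0):
    """SGD matrix factorization: updates are driven only by the observed
    (user, item, rating) triples, so missing ratings never enter the loss."""
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:                     # observed entries only
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]  # sparse
P, Q = factorize(ratings, n_users=3, n_items=2)
print(sum(P[2][f] * Q[0][f] for f in range(2)))  # prediction for a missing cell
```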
Document 2:::
Matrix factorization (algebra)
In homological algebra, a branch of mathematics, a matrix factorization is a tool used to study infinitely long resolutions, generally over commutative rings.
Document 3:::
Multifactor dimensionality reduction
Multifactor dimensionality reduction (MDR) is a statistical approach, also used in machine learning automatic approaches, for detecting and characterizing combinations of attributes or independent variables that interact to influence a dependent or class variable. MDR was designed specifically to identify nonadditive interactions among discrete variables that influence a binary outcome and is considered a nonparametric and model-free alternative to traditional statistical methods such as logistic regression. The basis of the MDR method is a constructive induction or feature engineering algorithm that converts two or more variables or attributes to a single attribute. This process of constructing a new attribute changes the representation space of the data. The end goal is to create or discover a representation that facilitates the detection of nonlinear or nonadditive interactions among the attributes such that prediction of the class variable is improved over that of the original representation of the data.
Document 4:::
Multifactor dimensionality reduction
Multifactor dimensionality reduction (MDR) is a statistical approach, also used in machine learning automatic approaches, for detecting and characterizing combinations of attributes or independent variables that interact to influence a dependent or class variable. MDR was designed specifically to identify nonadditive interactions among discrete variables that influence a binary outcome and is considered a nonparametric and model-free alternative to traditional statistical methods such as logistic regression. The basis of the MDR method is a constructive induction or feature engineering algorithm that converts two or more variables or attributes to a single attribute. This process of constructing a new attribute changes the representation space of the data. The end goal is to create or discover a representation that facilitates the detection of nonlinear or nonadditive interactions among the attributes such that prediction of the class variable is improved over that of the original representation of the data.
Document 5:::
Confusion matrix
In the field of machine learning and specifically the problem of statistical classification, a confusion matrix, also known as error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one; in unsupervised learning it is usually called a matching matrix. Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class, or vice versa – both variants are found in the literature. The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabeling one as another). It is a special kind of contingency table, with two dimensions ("actual" and "predicted"), and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table). |
epfl-collab | When constructing a word embedding, negative samples are | ['only words that never appear as context word', 'context words that are not part of the vocabulary of the document collection', 'all less frequent words that do not occur in the context of a given word', 'word - context word combinations that are not occurring in the document collection'] | D | null | Document 1:::
Precision and recall
For classification tasks, the terms true positives, true negatives, false positives, and false negatives (see Type I and type II errors for definitions) compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the classifier's prediction (sometimes known as the expectation), and the terms true and false refer to whether that prediction corresponds to the external judgment (sometimes known as the observation). Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix. Precision is then defined as TP / (TP + FP) and recall as TP / (TP + FN). Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity.
Document 2:::
Winnow (algorithm)
During training, Winnow is shown a sequence of positive and negative examples. From these it learns a decision hyperplane that can then be used to label novel examples as positive or negative. The algorithm can also be used in the online learning setting, where the learning and the classification phase are not clearly separated.
Document 3:::
GloVe (machine learning)
GloVe, coined from Global Vectors, is a model for distributed word representation. The model is an unsupervised learning algorithm for obtaining vector representations for words. This is achieved by mapping words into a meaningful space where the distance between words is related to semantic similarity. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space. It is developed as an open-source project at Stanford and was launched in 2014. As a log-bilinear regression model for unsupervised learning of word representations, it combines the features of two model families, namely the global matrix factorization and local context window methods.
Document 4:::
Spatial embedding
Spatial embedding is one of the feature learning techniques used in spatial analysis where points, lines, polygons or other spatial data types representing geographic locations are mapped to vectors of real numbers. Conceptually it involves a mathematical embedding from a space with many dimensions per geographic object to a continuous vector space with a much lower dimension. Such embedding methods allow complex spatial data to be used in neural networks and have been shown to improve performance in spatial analysis tasks.
Document 5:::
Strong and weak sampling
Strong and weak sampling are two sampling approaches in statistics, and are popular in computational cognitive science and language learning. In strong sampling, it is assumed that the data are intentionally generated as positive examples of a concept, while in weak sampling, it is assumed that the data are generated without any restrictions.
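Tying the passages to the question above: in skip-gram training with negative sampling, the negative examples are (word, context word) combinations that do not occur in the collection, as opposed to the observed positive pairs. A toy sketch (the corpus, window size and the explicit non-occurrence check are illustrative; practical implementations draw candidates from a smoothed unigram distribution):

```python
import random

corpus = "the quick brown fox jumps over the lazy dog".split()
window = 2

# Positive pairs: (word, context) combinations that actually occur.
positives = set()
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            positives.add((w, corpus[j]))

def sample_negatives(word, k, vocab, rng=random.Random(0)):
    """Negative samples: (word, context) pairs never seen in the collection."""
    negatives = []
    while len(negatives) < k:
        candidate = rng.choice(vocab)
        if (word, candidate) not in positives and candidate != word:
            negatives.append((word, candidate))
    return negatives

vocab = sorted(set(corpus))
print(sample_negatives("fox", 3, vocab))
```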
epfl-collab | Which of the following tasks would typically not be solved by clustering? | ['Spam detection in an email system', 'Detection of latent topics in a document collection', 'Discretization of continuous features', 'Community detection in social networks'] | A | null | Document 1:::
Data Clustering
It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem.
Document 2:::
Data Clustering
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Cluster analysis itself is not one specific algorithm, but the general task to be solved.
Document 3:::
Cluster analysis
It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem.
Document 4:::
Cluster analysis
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Cluster analysis itself is not one specific algorithm, but the general task to be solved.
Document 5:::
Cluster analysis
The appropriate clustering algorithm and parameter settings (including parameters such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and failure. It is often necessary to modify data preprocessing and model parameters until the result achieves the desired properties. |
epfl-collab | In general, what is true regarding Fagin's algorithm? | ['Posting files need to be indexed by the TF-IDF weights', 'It performs a complete scan over the posting files', 'It provably returns the k documents with the largest aggregate scores', 'It never reads more than (kn)½ entries from a posting list'] | C | null | Document 1:::
Fagin's theorem
Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines.
Document 2:::
Ford-Fulkerson algorithm
The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is sometimes called a "method" instead of an "algorithm" as the approach to finding augmenting paths in a residual graph is not fully specified or it is specified in several implementations with different running times. It was published in 1956 by L. R. Ford Jr. and D. R. Fulkerson.
Document 3:::
Fibonacci search technique
In computer science, the Fibonacci search technique is a method of searching a sorted array using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers. Compared to binary search where the sorted array is divided into two equal-sized parts, one of which is examined further, Fibonacci search divides the array into two parts that have sizes that are consecutive Fibonacci numbers. On average, this leads to about 4% more comparisons to be executed, but it has the advantage that one only needs addition and subtraction to calculate the indices of the accessed array elements, while classical binary search needs bit-shift (see Bitwise operation), division or multiplication, operations that were less common at the time Fibonacci search was first published. Fibonacci search has an average- and worst-case complexity of O(log n) (see Big O notation).
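For concreteness, here is a minimal sketch of Fibonacci search over a sorted Python list; the variable names and test values are illustrative only.

```python
# Fibonacci search on a sorted list: probe indices are chosen from consecutive
# Fibonacci numbers, so only additions and subtractions are needed.
def fibonacci_search(arr, x):
    n = len(arr)
    fib2, fib1 = 0, 1            # F(m-2), F(m-1)
    fib = fib2 + fib1            # F(m): smallest Fibonacci number >= n
    while fib < n:
        fib2, fib1 = fib1, fib
        fib = fib2 + fib1
    offset = -1                  # index of the eliminated prefix of the array
    while fib > 1:
        i = min(offset + fib2, n - 1)
        if arr[i] < x:           # discard the left part, shift Fibonacci window down
            fib, fib1 = fib1, fib2
            fib2 = fib - fib1
            offset = i
        elif arr[i] > x:         # discard the right part
            fib = fib2
            fib1 = fib1 - fib2
            fib2 = fib - fib1
        else:
            return i
    if fib1 and offset + 1 < n and arr[offset + 1] == x:
        return offset + 1
    return -1

print(fibonacci_search([1, 3, 5, 7, 9, 11], 9))   # -> 4
```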
Document 4:::
Ford-Fulkerson algorithm
The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path.
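A minimal sketch of the idea described above, using BFS to find augmenting paths (i.e. the Edmonds–Karp implementation of the Ford–Fulkerson method) over an adjacency matrix of capacities; the example graph is invented for illustration.

```python
from collections import deque

# Sketch: repeatedly find an augmenting path with available residual capacity
# via BFS, push the bottleneck amount of flow, and update the residual graph.
def max_flow(cap, source, sink):
    n = len(cap)
    residual = [row[:] for row in cap]     # residual capacities
    flow = 0
    while True:
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:             # no augmenting path left: done
            break
        # Bottleneck capacity along the discovered path
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Push flow and update residual capacities (including reverse edges)
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
    return flow

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # expected max flow: 5
```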
Document 5:::
Faugère's F4 and F5 algorithms
This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences. |
epfl-collab | Which of the following statements is correct in the context of information extraction? | ['The bootstrapping technique requires a dataset where statements are labelled', 'A confidence measure that prunes too permissive patterns discovered with bootstrapping can help reducing semantic drift', 'For supervised learning, sentences in which NER has detected no entities are used as negative samples', 'Distant supervision typically uses low-complexity features only, due to the lack of training data'] | B | null | Document 1:::
Knowledge discovery
Knowledge extraction is the creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL (data warehouse), the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data. The RDB2RDF W3C group is currently standardizing a language for extraction of resource description frameworks (RDF) from relational databases. Another popular example for knowledge extraction is the transformation of Wikipedia into structured data and also the mapping to existing knowledge (see DBpedia and Freebase).
Document 2:::
Keyword extraction
Keyword extraction is tasked with the automatic identification of terms that best describe the subject of a document. Key phrases, key terms, key segments or just keywords are the terminology used for defining the terms that represent the most relevant information contained in the document. Although the terminology is different, the function is the same: characterization of the topic discussed in a document. The task of keyword extraction is an important problem in text mining, information extraction, information retrieval and natural language processing (NLP).
Document 3:::
Implication (information science)
A formal context is a triple (G, M, I), where G and M are sets (of objects and attributes, respectively), and where I ⊆ G × M is a relation expressing which objects have which attributes. An implication that holds in such a formal context is called a valid implication for short. That an implication is valid can be expressed by the derivation operators: A → B holds in (G, M, I) iff A′ ⊆ B′ or, equivalently, iff B ⊆ A″.
Document 4:::
Implication (information science)
A formal context is a triple (G, M, I), where G and M are sets (of objects and attributes, respectively), and where I ⊆ G × M is a relation expressing which objects have which attributes. An implication that holds in such a formal context is called a valid implication for short. That an implication is valid can be expressed by the derivation operators: A → B holds in (G, M, I) iff A′ ⊆ B′ or, equivalently, iff B ⊆ A″.
Document 5:::
Morphological parsing
Morphological parsing, in natural language processing, is the process of determining the morphemes from which a given word is constructed. It must be able to distinguish between orthographic rules and morphological rules. For example, the word 'foxes' can be decomposed into 'fox' (the stem), and 'es' (a suffix indicating plurality). The generally accepted approach to morphological parsing is through the use of a finite state transducer (FST), which inputs words and outputs their stem and modifiers. |
epfl-collab | Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is correct? | ['LSI does take into account the frequency of words in the documents, whereas WE does not', 'The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot', 'LSI does not take into account the order of words in the document, whereas WE does', 'LSI is deterministic (given the dimension), whereas WE is not'] | D | null | Document 1:::
Semantic analysis (machine learning)
A prominent example is PLSI. Latent Dirichlet allocation involves attributing document terms to topics. n-grams and hidden Markov models work by representing the term stream as a Markov chain where each term is derived from the few terms before it.
Document 2:::
Semantic analysis (machine learning)
If language is grounded, it is equal to recognizing a machine-readable meaning. For the restricted domain of spatial analysis, a computer-based language understanding system was demonstrated. Latent semantic analysis (sometimes latent semantic indexing) is a class of techniques where documents are represented as vectors in term space.
Document 3:::
Random indexing
Random indexing is a dimensionality reduction method and computational framework for distributional semantics, based on the insight that very-high-dimensional vector space model implementations are impractical, that models need not grow in dimensionality when new items (e.g. new terminology) are encountered, and that a high-dimensional model can be projected into a space of lower dimensionality without compromising L2 distance metrics if the resulting dimensions are chosen appropriately. This is the original point of the random projection approach to dimension reduction first formulated as the Johnson–Lindenstrauss lemma, and locality-sensitive hashing has some of the same starting points. Random indexing, as used in representation of language, originates from the work of Pentti Kanerva on sparse distributed memory, and can be described as an incremental formulation of a random projection. It can also be verified that random indexing is a random projection technique for the construction of Euclidean spaces—i.e. L2-normed vector spaces.
Document 4:::
Natural Language Semantics
Natural Language Semantics is a quarterly peer-reviewed academic journal of semantics published by Springer Science+Business Media. It covers semantics and its interfaces in grammar, especially in syntax. The founding editors-in-chief were Irene Heim (MIT) and Angelika Kratzer (University of Massachusetts Amherst). The current editor-in-chief is Amy Rose Deal (University of California, Berkeley).
Document 5:::
Semantic networks
A semantic network may be instantiated as, for example, a graph database or a concept map. Typical standardized semantic networks are expressed as semantic triples. Semantic networks are used in natural language processing applications such as semantic parsing and word-sense disambiguation. Semantic networks can also be used as a method to analyze large texts and identify the main themes and topics (e.g., of social media posts), to reveal biases (e.g., in news coverage), or even to map an entire research field. |
epfl-collab | In vector space retrieval each row of the matrix M corresponds to | ['A document', 'A concept', 'A query', 'A term'] | D | null | Document 1:::
Matrix representation
Matrix representation is a method used by a computer language to store matrices of more than one dimension in memory. Fortran and C use different schemes for their native arrays. Fortran uses "Column Major", in which all the elements for a given column are stored contiguously in memory. C uses "Row Major", which stores all the elements for a given row contiguously in memory.
Document 2:::
Sparse Distributed Memory
The SDM works with n-dimensional vectors with binary components. Depending on the context, the vectors are called points, patterns, addresses, words, memory items, data, or events. This section is mostly about the properties of the vector space N = {0, 1}^n. Let n be the number of dimensions of the space.
Document 3:::
Sparse distributed memory
The SDM works with n-dimensional vectors with binary components. Depending on the context, the vectors are called points, patterns, addresses, words, memory items, data, or events. This section is mostly about the properties of the vector space N = {0, 1}^n. Let n be the number of dimensions of the space.
Document 4:::
Retrieval Data Structure
In computer science, a retrieval data structure, also known as a static function, is a space-efficient dictionary-like data type composed of a collection of (key, value) pairs that allows the following operations: construction from a collection of (key, value) pairs; retrieval of the value associated with a given key (or anything if the key is not contained in the collection); and, optionally, updating the value associated with a key. They can also be thought of as a function b: U → {0, 1}^r for a universe U and a set of keys S ⊆ U, where retrieve has to return b(x) for any value x ∈ S and an arbitrary value from {0, 1}^r otherwise. In contrast to static functions, AMQ-filters support (probabilistic) membership queries, and dictionaries additionally allow operations like listing keys or looking up the value associated with a key and returning some other symbol if the key is not contained. As can be derived from the operations, this data structure does not need to store the keys at all and may actually use less space than would be needed for a simple list of the key-value pairs.
Document 5:::
Square matrices
Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if R is a square matrix representing a rotation (rotation matrix) and v is a column vector describing the position of a point in space, the product Rv yields another column vector describing the position of that point after that rotation. If v is a row vector, the same transformation can be obtained using vR^T, where R^T is the transpose of R. |
epfl-collab | Which of the following is correct regarding prediction models? | ['Training error being less than test error means overfitting', 'Training error being less than test error means underfitting', 'Simple models have lower bias than complex models', 'Complex models tend to overfit, unless we feed them with more data'] | D | null | Document 1:::
Interval predictor model
In regression analysis, an interval predictor model (IPM) is an approach to regression where bounds on the function to be approximated are obtained. This differs from other techniques in machine learning, where usually one wishes to estimate point values or an entire probability distribution. Interval Predictor Models are sometimes referred to as a nonparametric regression technique, because a potentially infinite set of functions are contained by the IPM, and no specific distribution is implied for the regressed variables.
Document 2:::
Best linear unbiased prediction
The distinction arises because it is conventional to talk not about estimating fixed effects but rather about predicting random effects, but the two terms are otherwise equivalent. (This is a bit strange since the random effects have already been "realized"; they already exist. The use of the term "prediction" may be because in the field of animal breeding in which Henderson worked, the random effects were usually genetic merit, which could be used to predict the quality of offspring (Robinson page 28)).
Document 3:::
Scientific prediction
A prediction (Latin præ-, "before," and dicere, "to say"), or forecast, is a statement about a future event or data. They are often, but not always, based upon experience or knowledge. There is no universal agreement about the exact difference from "estimation"; different authors and disciplines ascribe different connotations. Future events are necessarily uncertain, so guaranteed accurate information about the future is impossible. Prediction can be useful to assist in making plans about possible developments.
Document 4:::
Linear model
In statistics, the term linear model is used in different ways according to the context. The most common occurrence is in connection with regression models and the term is often taken as synonymous with linear regression model. However, the term is also used in time series analysis with a different meaning. In each case, the designation "linear" is used to identify a subclass of models for which substantial reduction in the complexity of the related statistical theory is possible.
Document 5:::
Interval predictor model
This practice enables the analyst to adjust the desired level of conservatism in the prediction. As a consequence of the theory of scenario optimization, in many cases rigorous predictions can be made regarding the performance of the model at test time. Hence an interval predictor model can be seen as a guaranteed bound on quantile regression. Interval predictor models can also be seen as a way to prescribe the support of random predictor models, of which a Gaussian process is a specific case.
epfl-collab | Applying SVD to a term-document matrix M. Each concept is represented in K | ['as a least squares approximation of the matrix M', 'as a linear combination of terms of the vocabulary', 'as a singular value', 'as a linear combination of documents in the document collection'] | B | null | Document 1:::
Two-dimensional singular-value decomposition
Two-dimensional singular-value decomposition (2DSVD) computes the low-rank approximation of a set of matrices such as 2D images or weather maps in a manner almost identical to SVD (singular-value decomposition) which computes the low-rank approximation of a single matrix (or a set of 1D vectors).
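As a simplified illustration of the underlying low-rank idea (for a single matrix rather than the set of matrices that 2DSVD handles), a rank-k approximation via ordinary SVD can be computed with NumPy as follows; the matrix and rank below are arbitrary.

```python
import numpy as np

# Sketch: best rank-k approximation of one matrix via truncated SVD,
# the 1D analogue of the low-rank approximation 2DSVD computes for a set of matrices.
def low_rank_approximation(M, k):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

M = np.random.rand(6, 4)
M2 = low_rank_approximation(M, k=2)
print(np.linalg.matrix_rank(M2))   # 2 (up to numerical tolerance)
```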
Document 2:::
Sparse Distributed Memory
Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines – e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, etc. Sparse distributed memory is used for storing and retrieving large amounts (2^1000 bits) of information without focusing on the accuracy but on similarity of information. There are some recent applications in robot navigation and experience-based robot manipulation.
Document 3:::
Sparse distributed memory
Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines – e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, etc. Sparse distributed memory is used for storing and retrieving large amounts (2^1000 bits) of information without focusing on the accuracy but on similarity of information. There are some recent applications in robot navigation and experience-based robot manipulation.
Document 4:::
Bag-of-words model in computer vision
In computer vision, the bag-of-words model (BoW model) sometimes called bag-of-visual-words model can be applied to image classification or retrieval, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features.
Document 5:::
Random indexing
Random indexing is a dimensionality reduction method and computational framework for distributional semantics, based on the insight that very-high-dimensional vector space model implementations are impractical, that models need not grow in dimensionality when new items (e.g. new terminology) are encountered, and that a high-dimensional model can be projected into a space of lower dimensionality without compromising L2 distance metrics if the resulting dimensions are chosen appropriately. This is the original point of the random projection approach to dimension reduction first formulated as the Johnson–Lindenstrauss lemma, and locality-sensitive hashing has some of the same starting points. Random indexing, as used in representation of language, originates from the work of Pentti Kanerva on sparse distributed memory, and can be described as an incremental formulation of a random projection. It can also be verified that random indexing is a random projection technique for the construction of Euclidean spaces—i.e. L2-normed vector spaces. |
epfl-collab | An HMM model would not be an appropriate approach to identify | ['Word n-grams', 'Named Entities', 'Part-of-Speech tags', 'Concepts'] | A | null | Document 1:::
Maximum-entropy Markov model
In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction.
Document 2:::
Sequence labeling
Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field.
Document 3:::
Sequence labeling
Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field.
Document 4:::
Baum–Welch algorithm
In electrical engineering, statistical computing and bioinformatics, the Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm to compute the statistics for the expectation step.
Document 5:::
Viterbi Algorithm
The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events, especially in the context of Markov information sources and hidden Markov models (HMM). The algorithm has found universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, speech synthesis, diarization, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal. |
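A minimal sketch of the Viterbi recursion and backtracking for a small discrete HMM follows; the states, observations, and probabilities are the usual toy weather example and are purely illustrative.

```python
# Minimal Viterbi sketch for a discrete HMM. All model parameters below are
# illustrative placeholders, not taken from any real system.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (probability of the best path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack the most likely state sequence (the Viterbi path)
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))
# -> ['Sunny', 'Rainy', 'Rainy']
```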
epfl-collab | Which of the following is NOT an (instance-level) ontology? | ['WikiData', 'Wordnet', 'Google Knowledge Graph', 'Schema.org'] | D | null | Document 1:::
Class (knowledge representation)
In knowledge representation, a class is a collection of individuals or individual objects. A class can be defined either by extension (specifying members) or by intension (specifying conditions), using what is called in some ontology languages like OWL a class expression. According to the Type–token distinction, the ontology is divided into individuals, which are real-world objects or events, and types, or classes, which are sets of real-world objects. Class expressions or definitions give the properties that the individuals must fulfill to be members of the class. Individuals that fulfill the property are called instances.
Document 2:::
Class (knowledge representation)
The first definition of class results in ontologies in which a class is a subclass of collection. The second definition of class results in ontologies in which collections and classes are more fundamentally different. Classes may classify individuals, other classes, or a combination of both.
Document 3:::
Plant ontology
Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education.
Document 4:::
Cell ontology
The Cell Ontology is an ontology that aims at capturing the diversity of cell types in animals. It is part of the Open Biomedical and Biological Ontologies (OBO) Foundry. The Cell Ontology identifiers and organizational structure are used to annotate data at the level of cell types, for example in single-cell RNA-seq studies. It is one important resource in the construction of the Human Cell Atlas. The Cell Ontology was first described in an academic article in 2005.
Document 5:::
Class (knowledge representation)
The classes of an ontology may be extensional or intensional in nature. A class is extensional if and only if it is characterized solely by its membership. More precisely, a class C is extensional if and only if for any class C', if C' has exactly the same members as C, then C and C' are identical. If a class does not satisfy this condition, then it is intensional. |
epfl-collab | When using linear regression, which techniques improve your result? (One or multiple answers) | ['polynomial combination of features', 'linear regression does not allow polynomial features', 'because the linear nature needs to be preserved, non-linear combination of features are not allowed', 'adding new features that are non-linear combination of existing features'] | A | null | Document 1:::
Linear Regression
In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Such models are called linear models.
Document 2:::
Linear Regression
This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine. Linear regression has many practical uses. Most applications fall into one of the following two broad categories: If the goal is error reduction in prediction or forecasting, linear regression can be used to fit a predictive model to an observed data set of values of the response and explanatory variables.
Document 3:::
Single-equation methods (econometrics)
A variety of methods are used in econometrics to estimate models consisting of a single equation. The oldest and still the most commonly used is the ordinary least squares method used to estimate linear regressions. A variety of methods are available to estimate non-linear models.
Document 4:::
Regression prediction
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label' in machine learning parlance) and one or more independent variables (often called 'predictors', 'covariates', 'explanatory variables' or 'features'). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. For example, the method of ordinary least squares computes the unique line (or hyperplane) that minimizes the sum of squared differences between the true data and that line (or hyperplane). For specific mathematical reasons (see linear regression), this allows the researcher to estimate the conditional expectation (or population average value) of the dependent variable when the independent variables take on a given set of values.
Document 5:::
Linear Regression
After developing such a model, if additional values of the explanatory variables are collected without an accompanying response value, the fitted model can be used to make a prediction of the response. If the goal is to explain variation in the response variable that can be attributed to variation in the explanatory variables, linear regression analysis can be applied to quantify the strength of the relationship between the response and the explanatory variables, and in particular to determine whether some explanatory variables may have no linear relationship with the response at all, or to identify which subsets of explanatory variables may contain redundant information about the response.Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function as in ridge regression (L2-norm penalty) and lasso (L1-norm penalty). Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous. |
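To make the least-squares and ridge (L2-penalized) fits mentioned above concrete, here is a small NumPy sketch; the synthetic data, the choice to penalize the intercept in the ridge variant, and all names are simplifying assumptions.

```python
import numpy as np

# Sketch: ordinary least squares fit plus a ridge (L2-penalized) variant.
# X is an n x d design matrix, y an n-vector of responses.
def ols_fit(X, y):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])     # append an intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)        # minimizes ||Xb w - y||^2
    return w

def ridge_fit(X, y, lam=1.0):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    d = Xb.shape[1]
    # For simplicity the intercept is penalized too; solve (X^T X + lam I) w = X^T y
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(d), Xb.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 + 0.1 * rng.normal(size=100)
print(ols_fit(X, y))     # approximately [3.0, -2.0, 0.5]
```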
epfl-collab | What is our final goal in machine learning? (One answer) | [' Overfit ', ' Generalize ', ' Megafit ', ' Underfit'] | B | null | Document 1:::
Validation set
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data used to build the model are usually divided into multiple data sets.
Document 2:::
Machine learning
Machine learning (ML) is an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive, and instead the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. Recently, generative artificial neural networks have been able to surpass results of many previous approaches. Machine-learning approaches have been applied to large language models, computer vision, speech recognition, email filtering, agriculture and medicine, where it is too costly to develop algorithms to perform the needed tasks.The mathematical foundations of ML are provided by mathematical optimization (mathematical programming) methods. Data mining is a related (parallel) field of study, focusing on exploratory data analysis through unsupervised learning.ML is known in its application across business problems under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods.
Document 3:::
Trainable parameter
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data used to build the model are usually divided into multiple data sets.
Document 4:::
Linear classification
In the field of machine learning, the goal of statistical classification is to use an object's characteristics to identify which class (or group) it belongs to. A linear classifier achieves this by making a classification decision based on the value of a linear combination of the characteristics. An object's characteristics are also known as feature values and are typically presented to the machine in a vector called a feature vector. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use.
Document 5:::
Learning algorithms
Machine learning (ML) is an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive, and instead the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. Recently, generative artificial neural networks have been able to surpass results of many previous approaches. Machine-learning approaches have been applied to large language models, computer vision, speech recognition, email filtering, agriculture and medicine, where it is too costly to develop algorithms to perform the needed tasks.The mathematical foundations of ML are provided by mathematical optimization (mathematical programming) methods. Data mining is a related (parallel) field of study, focusing on exploratory data analysis through unsupervised learning.ML is known in its application across business problems under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods. |
epfl-collab | For binary classification, which of the following methods can achieve perfect training accuracy on \textbf{all} linearly separable datasets? | ['Hard-margin SVM', 'None of the suggested', 'Decision tree', '15-nearest neighbors'] | C | null | Document 1:::
Binary classifier
Binary classification is the task of classifying the elements of a set into two groups (each called class) on the basis of a classification rule. Typical binary classification problems include: Medical testing to determine if a patient has certain disease or not; Quality control in industry, deciding whether a specification has been met; In information retrieval, deciding whether a page should be in the result set of a search or not.Binary classification is dichotomization applied to a practical situation. In many practical binary classification problems, the two groups are not symmetric, and rather than overall accuracy, the relative proportion of different types of errors is of interest. For example, in medical testing, detecting a disease when it is not present (a false positive) is considered differently from not detecting a disease when it is present (a false negative).
Document 2:::
Binary classification
Binary classification is the task of classifying the elements of a set into two groups (each called class) on the basis of a classification rule. Typical binary classification problems include: Medical testing to determine if a patient has certain disease or not; Quality control in industry, deciding whether a specification has been met; In information retrieval, deciding whether a page should be in the result set of a search or not.Binary classification is dichotomization applied to a practical situation. In many practical binary classification problems, the two groups are not symmetric, and rather than overall accuracy, the relative proportion of different types of errors is of interest. For example, in medical testing, detecting a disease when it is not present (a false positive) is considered differently from not detecting a disease when it is present (a false negative).
Document 3:::
Naive Bayes classifier
In statistics, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve high accuracy levels.Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression,: 718 which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers. In the statistics literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but naive Bayes is not (necessarily) a Bayesian method.
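As a toy illustration of the naive Bayes decision rule (here a Bernoulli variant with Laplace smoothing rather than the kernel-density variant mentioned above), consider the following sketch; the vocabulary, labels, and training examples are invented for the example.

```python
import math
from collections import defaultdict

# Sketch of a Bernoulli naive Bayes classifier with Laplace (add-one) smoothing.
# Each example is a set of "present" binary features plus a class label.
def train_nb(examples, vocab):
    class_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for feats, label in examples:
        class_counts[label] += 1
        for f in feats:
            feat_counts[label][f] += 1
    n = len(examples)
    priors = {c: class_counts[c] / n for c in class_counts}
    likelihoods = {
        c: {f: (feat_counts[c][f] + 1) / (class_counts[c] + 2) for f in vocab}
        for c in class_counts
    }
    return priors, likelihoods

def predict_nb(feats, priors, likelihoods, vocab):
    scores = {}
    for c in priors:
        score = math.log(priors[c])
        for f in vocab:                      # independence assumption over features
            p = likelihoods[c][f]
            score += math.log(p if f in feats else 1.0 - p)
        scores[c] = score
    return max(scores, key=scores.get)

vocab = {"free", "win", "meeting", "report"}
train = [({"free", "win"}, "spam"), ({"win"}, "spam"),
         ({"meeting", "report"}, "ham"), ({"report"}, "ham")]
priors, likes = train_nb(train, vocab)
print(predict_nb({"free"}, priors, likes, vocab))   # -> "spam"
```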
Document 4:::
Multiclass classifier
In machine learning and statistical classification, multiclass classification or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification). While many classification algorithms (notably multinomial logistic regression) naturally permit the use of more than two classes, some are by nature binary algorithms; these can, however, be turned into multinomial classifiers by a variety of strategies. Multiclass classification should not be confused with multi-label classification, where multiple labels are to be predicted for each instance.
Document 5:::
Linear classification
A linear classifier is often used in situations where the speed of classification is an issue, since it is often the fastest classifier, especially when x → {\displaystyle {\vec {x}}} is sparse. Also, linear classifiers often work very well when the number of dimensions in x → {\displaystyle {\vec {x}}} is large, as in document classification, where each element in x → {\displaystyle {\vec {x}}} is typically the number of occurrences of a word in a document (see document-term matrix). In such cases, the classifier should be well-regularized. |
epfl-collab | A model predicts $\mathbf{\hat{y}} = [1, 0, 1, 1, 1]$. The ground truths are $\mathbf{y} = [1, 0, 0, 1, 1]$.
What is the accuracy? | ['0.5', '0.875', '0.75', '0.8'] | D | null | Document 1:::
High-dimensional model representation
High-dimensional model representation is a finite expansion for a given multivariable function. The expansion was first described by Ilya M. Sobol as f(x) = f_0 + ∑_{i=1}^{n} f_i(x_i) + ∑_{1≤i<j≤n} f_{ij}(x_i, x_j) + ⋯ + f_{12…n}(x_1, …, x_n).
Document 2:::
Mean square deviation
If a vector of n predictions is generated from a sample of n data points on all variables, and Y is the vector of observed values of the variable being predicted, with Ŷ being the predicted values (e.g. as from a least-squares fit), then the within-sample MSE of the predictor is computed as MSE = (1/n) ∑_{i=1}^{n} (Y_i − Ŷ_i)². In other words, the MSE is the mean of the squares of the errors (Y_i − Ŷ_i)². This is an easily computable quantity for a particular sample (and hence is sample-dependent).
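The within-sample MSE above is straightforward to compute; here is a tiny NumPy sketch with made-up numbers.

```python
import numpy as np

# Sketch: within-sample mean squared error of a predictor.
def mse(y, y_hat):
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.mean((y - y_hat) ** 2))

print(mse([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0]))   # 0.375
```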
Document 3:::
Coefficient of determination
A data set has n values marked y_1, …, y_n (collectively known as y_i or as a vector y = (y_1, …, y_n)^T), each associated with a fitted (or modeled, or predicted) value f_1, …, f_n (known as f_i, or sometimes ŷ_i, as a vector f). Define the residuals as e_i = y_i − f_i (forming a vector e). If ȳ = (1/n) ∑_{i=1}^{n} y_i is the mean of the observed data, then the variability of the data set can be measured with two sums of squares formulas: the sum of squares of residuals, also called the residual sum of squares, SS_res = ∑_i (y_i − f_i)² = ∑_i e_i², and the total sum of squares (proportional to the variance of the data), SS_tot = ∑_i (y_i − ȳ)². The most general definition of the coefficient of determination is R² = 1 − SS_res / SS_tot. In the best case, the modeled values exactly match the observed values, which results in SS_res = 0 and R² = 1. A baseline model, which always predicts ȳ, will have R² = 0. Models that have worse predictions than this baseline will have a negative R².
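Likewise, the coefficient of determination follows directly from the two sums of squares; a small NumPy sketch with illustrative numbers:

```python
import numpy as np

# Sketch: R^2 from observed values y and fitted values f.
def r_squared(y, f):
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    ss_res = np.sum((y - f) ** 2)               # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)        # total sum of squares
    return 1.0 - ss_res / ss_tot

print(r_squared([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))   # ~0.949
```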
Document 4:::
Loss functions for classification
Some of these surrogates are described below. In practice, the probability distribution p(x⃗, y) is unknown. Consequently, utilizing a training set of n independently and identically distributed sample points S = {(x⃗_1, y_1), …, (x⃗_n, y_n)} drawn from the data sample space, one seeks to minimize the empirical risk I_S = (1/n) ∑_{i=1}^{n} V(f(x⃗_i), y_i) as a proxy for expected risk. (See statistical learning theory for a more detailed description.)
Document 5:::
Underfitting
In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. In a mathematical sense, these parameters represent the degree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure. |
epfl-collab | K-Means: | ['always converges to the same solution, no matter the initialization', "doesn't always converge", 'always converges, but not always to the same solution', 'can never converge'] | C | null | Document 1:::
K-means algorithm
k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. k-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances.
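A compact sketch of Lloyd's algorithm, the standard iterative refinement for k-means, is given below; the random initialization, fixed iteration cap, and synthetic two-cluster data are simplifying assumptions.

```python
import numpy as np

# Minimal Lloyd's-algorithm sketch for k-means; initialization, tie handling,
# and empty-cluster handling are simplified for illustration.
def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid by squared Euclidean distance
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its cluster
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (50, 2)),
               np.random.default_rng(2).normal(5, 0.5, (50, 2))])
centroids, labels = kmeans(X, k=2)
print(centroids)   # roughly one centroid near (0, 0) and one near (5, 5)
```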
Document 2:::
Determining the number of clusters in a data set
Determining the number of clusters in a data set, a quantity often labelled k as in the k-means algorithm, is a frequent problem in data clustering, and is a distinct issue from the process of actually solving the clustering problem. For a certain class of clustering algorithms (in particular k-means, k-medoids and expectation–maximization algorithm), there is a parameter commonly referred to as k that specifies the number of clusters to detect. Other algorithms such as DBSCAN and OPTICS algorithm do not require the specification of this parameter; hierarchical clustering avoids the problem altogether. The correct choice of k is often ambiguous, with interpretations depending on the shape and scale of the distribution of points in a data set and the desired clustering resolution of the user.
Document 3:::
K-median problem
In statistics, k-medians clustering is a cluster analysis algorithm. It is a variation of k-means clustering where instead of calculating the mean for each cluster to determine its centroid, one instead calculates the median. This has the effect of minimizing error over all clusters with respect to the 1-norm distance metric, as opposed to the squared 2-norm distance metric (which k-means does.) This relates directly to the k-median problem with respect to the 1-norm, which is the problem of finding k centers such that the clusters formed by them are the most compact.
Document 4:::
K-means algorithm
For instance, better Euclidean solutions can be found using k-medians and k-medoids. The problem is computationally difficult (NP-hard); however, efficient heuristic algorithms converge quickly to a local optimum. These are usually similar to the expectation-maximization algorithm for mixtures of Gaussian distributions via an iterative refinement approach employed by both k-means and Gaussian mixture modeling.
Document 5:::
Fuzzy clustering
Fuzzy clustering (also referred to as soft clustering or soft k-means) is a form of clustering in which each data point can belong to more than one cluster. Clustering or cluster analysis involves assigning data points to clusters such that items in the same cluster are as similar as possible, while items belonging to different clusters are as dissimilar as possible. Clusters are identified via similarity measures. These similarity measures include distance, connectivity, and intensity. Different similarity measures may be chosen based on the data or the application. |
epfl-collab | What is the algorithm to perform optimization with gradient descent? Actions between Start loop and End loop are performed multiple times. (One answer) | ['1 Initialize weights, 2 Start loop, 3 Update weights, 4 End loop, 5 Compute gradients ', '1 Initialize weights, 2 Start loop, 3 Compute gradients, 4 Update weights, 5 End Loop', '1 Start loop, 2 Initialize weights, 3 Compute gradients, 4 Update weights, 5 End loop', '1 Initialize weights, 2 Compute gradients, 3 Start loop, 4 Update weights, 5 End loop'] | B | null | Document 1:::
Gradient descent
In mathematics, gradient descent (also often called steepest descent) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent. It is particularly useful in machine learning for minimizing the cost or loss function.
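A minimal sketch of the repeated "step opposite to the gradient" loop described above, applied to a simple least-squares objective; the fixed step size and the particular objective are illustrative choices.

```python
import numpy as np

# Sketch of plain gradient descent on a differentiable function, here
# f(x) = ||A x - b||^2 with gradient 2 A^T (A x - b). Step size is fixed.
def gradient_descent(grad, x0, lr=0.01, n_steps=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - lr * grad(x)        # move in the direction of steepest descent
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 3.0])
grad = lambda x: 2.0 * A.T @ (A @ x - b)
print(gradient_descent(grad, x0=[0.0, 0.0]))   # approaches the minimizer [2.0, 3.0]
```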
Document 2:::
Gradient method
In optimization, a gradient method is an algorithm to solve problems of the form min_{x ∈ ℝ^n} f(x) with the search directions defined by the gradient of the function at the current point. Examples of gradient methods are the gradient descent and the conjugate gradient.
Document 3:::
Gradient descent
Gradient descent should not be confused with local search algorithms, although both are iterative methods for optimization. Gradient descent is generally attributed to Augustin-Louis Cauchy, who first suggested it in 1847. Jacques Hadamard independently proposed a similar method in 1907. Its convergence properties for non-linear optimization problems were first studied by Haskell Curry in 1944, with the method becoming increasingly well-studied and used in the following decades.A simple extension of gradient descent, stochastic gradient descent, serves as the most basic algorithm used for training most deep networks today.
Document 4:::
Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.While the basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s, stochastic gradient descent has become an important optimization method in machine learning.
Document 5:::
Coordinate descent
Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function. At each iteration, the algorithm determines a coordinate or coordinate block via a coordinate selection rule, then exactly or inexactly minimizes over the corresponding coordinate hyperplane while fixing all other coordinates or coordinate blocks. A line search along the coordinate direction can be performed at the current iterate to determine the appropriate step size. Coordinate descent is applicable in both differentiable and derivative-free contexts. |