Dataset Viewer
Columns: id (string, length 1–4), question (string, length 6–1.87k), context (sequence of exactly 5 passages), choices (sequence of 2–18 items), answer (string, length 1–840). Each row below lists these fields in order: id, question, context, choices, answer.
5
Which of the following scheduler policies are preemptive?
[ "Fixed-priority preemptive scheduling is a scheduling system commonly used in real-time systems. With fixed priority preemptive scheduling, the scheduler ensures that at any given time, the processor executes the highest priority task of all those tasks that are currently ready to execute. The preemptive scheduler has a clock interrupt task that can provide the scheduler with options to switch after the task has had a given period to execute—the time slice. This scheduling system has the advantage of making sure no task hogs the processor for any time longer than the time slice. However, this scheduling scheme is vulnerable to process or thread lockout: since priority is given to higher-priority tasks, the lower-priority tasks could wait an indefinite amount of time. One common method of arbitrating this situation is aging, which gradually increments the priority of waiting processes and threads, ensuring that they will all eventually execute. Most real-time operating systems (RTOSs) have preemptive schedulers. Also turning off time slicing effectively gives you the non-preemptive RTOS. Preemptive scheduling is often differentiated with cooperative scheduling, in which a task can run continuously from start to end without being preempted by other tasks. To have a task switch, the task must explicitly call the scheduler. Cooperative scheduling is used in a few RTOS such as Salvo or TinyOS.", "Run-to-completion scheduling or nonpreemptive scheduling is a scheduling model in which each task runs until it either finishes, or explicitly yields control back to the scheduler. Run-to-completion systems typically have an event queue which is serviced either in strict order of admission by an event loop, or by an admission scheduler which is capable of scheduling events out of order, based on other constraints such as deadlines. Some preemptive multitasking scheduling systems behave as run-to-completion schedulers in regard to scheduling tasks at one particular process priority level, at the same time as those processes still preempt other lower priority tasks and are themselves preempted by higher priority tasks. See also Preemptive multitasking Cooperative multitasking", "jobs to be interrupted (paused and resumed later) 39 Preemptive scheduling • Previous schedulers (FIFO, SJF) are non-preemptive • Non-preemptive schedulers only switch to other jobs once the current jobs is finished (run-to-completion) OR • Other way: Non-preemptive schedulers only switch to other process if the current process gives up the CPU voluntarily 40 Preemptive scheduling • Previous schedulers (FIFO, SJF) are non-preemptive • Non-preemptive schedulers only switch to other jobs once the current jobs is finished (run-to-completion) OR • Other way: Non-preemptive schedulers only switch to other process if the current process gives up the CPU voluntarily • Preemptive schedulers can take the control of CPU at any time, switching to another process according to the the scheduling policy • OS relies on timer interrupts and context switch for preemptive process/jobs 41 Shortest time to completion first (STCF) • STCF extends the SJF by adding preemption • Any time a new job enters the system: a. STCF scheduler determines which of the remaining jobs (including new job) has the least time left b. 
STCF then schedules the shortest job first 42 Shortest time to completion first (STCF) • A runs for 100 seconds, while B and C run 10 seconds • When B and C arrive, A gets preempted and is scheduled after B/C are finished • Tarrival(A) = 0 • Tarrival(B) = Tarrival(C) = 10 • Tturnaround(A) = 120 • Tturnaround(B) = (20 - 10) = 10 • Tturnaround(C) = (30 - 10) = 20 Average turnaround time = (120 + 10 + 20) / 3 = 50 0 20 40 60 80 100 120 A B C [B, C arrive] A 43 Shortest time to completion first (STCF) • A runs for 100 seconds, while B and C run 10 seconds • When B and C arrive, A gets preempted and is scheduled after B/C are finished • Tarrival(A) = 0 • Tarrival(B) = Tarrival(C", "ed to Q2 (R 5) • The same procedure happens until 100 ms • Process B and C also join Q2 • A scheduled for 10 ms • B is scheduled and then followed by C that are issuing IO requests as well MLFQ does not starve long running jobs and gives equal time to all jobs A B C boost boost boost Putting it together: the “uptime” utility - combines CPU and IO contention 70 71 Summary • Context switching and preemption are fundamental mechanisms that allow the OS to remain in control and to implement higher level scheduling policies • Schedulers need to optimize for different metrics: utilization, turnaround time, response time, fairness • FIFO: Simple, non-preemptive scheduler • SJF: non-preemptive, prevents process jams • STCF: preemptive, prevents jamming of late processes • RR: preemptive, great response time, bad turnaround time • MLFQ: preemptive, most realistic Insight: Past behavior is good predictor for future behavior", "the implementation of the higher-level scheduler. A compromise has to be made involving the following variables: Response time: A process should not be swapped out for too long. Then some other process (or the user) will have to wait needlessly long. If this variable is not considered resource starvation may occur and a process may not complete at all. Size of the process: Larger processes must be subject to fewer swaps than smaller ones because they take longer time to swap. Because they are larger, fewer processes can share the memory with the process. Priority: The higher the priority of the process, the longer it should stay in memory so that it completes faster. References Tanenbaum, Albert Woodhull, Operating Systems: Design and Implementation, p.92" ]
[ "FIFO (First In, First Out)", "SJF (Shortest Job First)", "STCF (Shortest Time to Completion First)", "RR (Round Robin)" ]
['STCF (Shortest Time to Completion First)', 'RR (Round Robin)']
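The STCF walkthrough quoted in the context (A needs 100 s and arrives at t=0; B and C need 10 s each and arrive at t=10) can be checked with a short simulation. This is a minimal sketch added for illustration, not part of the dataset; the job lengths and arrival times are taken from the passage.

```c
#include <stdio.h>

/* Minimal STCF (preemptive shortest-job-first) simulation of the example
 * in the passage: A runs 100 s from t=0; B and C run 10 s each from t=10. */
struct job { const char *name; int arrival, remaining, finish; };

int main(void)
{
    struct job jobs[] = { {"A", 0, 100, -1}, {"B", 10, 10, -1}, {"C", 10, 10, -1} };
    int n = 3, done = 0, t = 0;
    double total = 0;

    while (done < n) {
        /* Pick the arrived, unfinished job with the least remaining time. */
        int best = -1;
        for (int i = 0; i < n; i++)
            if (jobs[i].arrival <= t && jobs[i].remaining > 0 &&
                (best < 0 || jobs[i].remaining < jobs[best].remaining))
                best = i;
        if (best < 0) { t++; continue; }   /* CPU idle until the next arrival */
        jobs[best].remaining--;            /* run the chosen job for 1 s      */
        t++;
        if (jobs[best].remaining == 0) {
            jobs[best].finish = t;
            total += t - jobs[best].arrival;
            done++;
        }
    }
    for (int i = 0; i < n; i++)
        printf("T_turnaround(%s) = %d\n", jobs[i].name,
               jobs[i].finish - jobs[i].arrival);
    printf("average = %.1f\n", total / n);   /* prints 50.0, as in the slides */
    return 0;
}
```

Running it reproduces the passage's numbers: turnaround times of 120, 10, and 20 seconds, for an average of 50.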
6
Which of the following are correct implementations of the acquire function? Assume 0 means UNLOCKED and 1 means LOCKED. Initially l->locked = 0.
[ "is in the lexical scope of the anonymous function. Higher-order funcMons • Functions that operate on other functions, either by taking them as arguments or by returning them, are called higher-order functions. function myFunc() { const anotherFunc = function() { console.log(\"inner\"); } return anotherFunc; } const innerFunc = myFunc(); innerFunc(); // \"inner\" myFunc()(); // \"inner\" function forEach(array, callback) {... } Closures • A closure is the combination of a function and the lexical environment within which that function was declared. • The function defined in the closure ‘remembers’ the environment in which it was created. function greaterThan(n) { return function(m) { return m > n; }; } const greaterThan10 = greaterThan(10); greaterThan10(11); // true let counter = (function() { let privateCounter = 0; function changeBy(val) { privateCounter += val; } return { increment: function() { changeBy(1); }, decrement: function() { changeBy(-1); }, value: function() { return privateCounter; } }; })(); console.log(counter.value()); // logs 0 counter.increment(); counter.increment(); console.log(counter.value()); // logs 2 counter.decrement(); console.log(counter.value()); // logs 1 https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures [ViralPatel.net] Arrow funcMons • An arrow function expression has a shorter syntax than a function expression let counter = (function() { let privateCounter = 0; changeBy = (val) => { return privateCounter += val; } return { increment: () => changeBy(1), // one liner can remove return and {} decrement: () => changeBy(-1), value: () => privateCounter, reset: (val=0) => { privateCounter = val; }, }", "1[A, B]: def apply(x: A): B So functions are objects with apply methods. There are also traits Function2, Function3,... for functions which take more parameters. Expansion of Function Values An anonymous function such as (x: Int) => x * x is expanded to: Expansion of Function Values An anonymous function such as (x: Int) => x * x is expanded to: new Function1[Int, Int]: def apply(x: Int) = x * x Expansion of Function Values An anonymous function such as (x: Int) => x * x is expanded to: new Function1[Int, Int]: def apply(x: Int) = x * x This anonymous class can itself be thought of as a block that defines and instantiates a local class: { class $anonfun() extends Function1[Int, Int]: def apply(x: Int) = x * x $anonfun() } Expansion of Function Calls A function call, such as f(a, b), where f is a value of some class type, is expanded to f.apply(a, b) So the OO-translation of val f = (x: Int) => x * x f(7) would be val f = new Function1[Int, Int]: def apply(x: Int) = x * x f.apply(7) Functions and Methods Note that a method such as def f(x: Int): Boolean =. is not itself a function value. But if f is used in a place where a Function type is expected, it is converted automatically to the function value (x: Int) => f(x) or, expanded: new Function1[Int, Boolean]: def apply(x: Int) = f(x) Exercise In package week3, define an object IntSet:. 
with 3 functions in it so that users can create IntSets of lengths 0-2 using syntax IntSet() // the empty set IntSet(1) // the set with single", "q ~f(L)\\cap f(R)~} is strict: ∅ = f ( ∅ ) = f ( L ∩ R ) ≠ f ( L ) ∩ f ( R ) = { y } ∩ { y } = { y } {\\displaystyle \\varnothing ~=~f(\\varnothing )~=~f(L\\cap R)~\\neq ~f(L)\\cap f(R)~=~\\{y\\}\\cap \\{y\\}~=~\\{y\\}} In words: functions might not distribute over set intersection ∩ {\\displaystyle \\,\\cap \\,} (which can be defined as the set subtraction of two sets: L ∩ R = L <unk> ( L <unk> R ) {\\displaystyle L\\cap R=L\\setminus (L\\setminus R)} ). What the set operations in these four examples have in common is that they either are set subtraction <unk> {\\displaystyle \\setminus } (examples (1) and (2)) or else they can naturally be defined as the set subtraction of two sets (examples (3) and (4)). Mnemonic: In fact, for each of the above four set formulas for which equality is not guaranteed, the direction of the containment (that is, whether to use ⊆ or <unk> {\\displaystyle \\,\\subseteq {\\text{ or }}\\supseteq \\,} ) can always be deduced by imagining the function f {\\displaystyle f} as being constant and the two sets ( L {\\displaystyle L} and R {\\displaystyle R} ) as being non-empty disjoint subsets of its domain. This is because every equality fails for such a function and sets: one side will be always be ∅ {\\displaystyle \\varnothing } and the other non-empty − from this fact, the correct choice of ⊆ or <unk> {\\displaystyle \\,\\subseteq {\\text{ or }}\\supseteq \\,} can be deduced by answering: \"which side is empty?\" For example, to decide if the", "(LL) = l.l rx, mem[addr] o Store-conditional (SC) = s.c rx, mem[addr] u Interacts with cache-coherence protocol to guarantee no intervening writes to [addr] u Used in MIPS, DEC Alpha, and all ARM cores Alternative: Load-Locked & Store Conditional CS 307 – Fall 2018 Lec.07 - Slide 58 u Recall the incorrect first attempt: o Two cores could both see the lock as free, and enter the critical section u How does LL/SC solve the problem? How is LL/SC Atomic? Lock: Unlock: ld r1, mem[addr] // load word into r1 cmp r1, #0 // if 0, store 1 bnz Lock // else, try again st mem[addr], #1 st mem[addr], #0 // store 0 to address CS 307 – Fall 2018 Lec.07 - Slide 59 u LL puts the address and flag into a link register Remember and Validate the Address P0 Cache ll X BusRd X Link Register CS 307 – Fall 2018 Lec.07 - Slide 60 u LL puts the address and flag into a link register o Invalidations or evictions for that address clear the flag, and the SC will then fail o Signals that another core modified the address Remember and Validate the Address P0 Cache BusInv 0 Link Register CS 307 – Fall 2018 Lec.07 - Slide 61 u Consider the following case: o Processors 0 and 1 both execute the following code, with cache block X beginning in Shared o Both ll [X] read 0 o Both begin to issue sc [X] o Will we break mutual exclusion? Simultaneous SCs? Lock: ll r2, [X] cmp r2, #0 bnz Lock // if 1, spin addi r2, #1 sc [X], r2 CS 307 – Fall 2018 Lec.07 - Slide 62 u Will we break mutual exclusion? o Answer: No! Why? § Remember, cache coherence ensures the propagation of values to a single address. o So, when both processors try to BusInv, one of them will “win”, and clear the other’s link register flag § e.g., Say P1 wins", "t) du système d'acquisition du signal (filtres), conduisant généralement à du bruit de convolution. 
•= Bande passante fréquentielle limitée (par exemple dans le cas des lignes téléphoniques pour lesquelles les fréquences transmises sont naturellement limitées entre environ 350Hz et 3200Hz). •= Elocution inhabituelle ou altérée, comprenant entre autre: l'effet Lombard, (qui désigne toutes les modifications, souvent inaudibles, du signal acoustique lors de l'élocution en milieu bruité), le stress physique ou émotionnel, une vitesse d'élocution inhabituelle, ainsi que les bruits de lèvres ou de respiration. Certains systèmes peuvent être plus robustes que d'autres à l'une ou l'autre de ces perturbations, mais en règle générale, les reconnaisseurs de parole actuels restent encore trop sensibles à ces paramètres. 4.3 Principes généraux Le problème de la reconnaissance automatique de la parole consiste à extraire l'information contenue dans un signal de parole (signal électrique obtenu à la sortie d'un microphone et typiquement échantillonné à 8kHz dans le cas de lignes téléphoniques ou entre 10 et 16kHz dans le cas de saisie par microphone). Bien que ceci soulève également le problème de la compréhension de la parole, nous nous contenterons ici de discuter du problème de la reconnaissance des mots contenus dans une phrases. 4.3.1 Reconnaissance par comparaison à des exemples Les premiers succès en reconnaissance vocale ont été obtenus dans les années 70 à l’aide d’un paradigme de reconnaissance de mots « par l’exemple ». L’idée, très simple dans son principe, consiste à faire prononcer un ou plusieurs exemples de chacun des mots susceptibles d’être reconnus, et à les enregistrer sous forme de vecteurs acoustiques (typiquement : un vecteur de coefficients LPC ou assimilés toutes les 10 ms). Puisque cette suite de vecteurs acoustiques caractérisent complètement l’évolution de l’enveloppe spectrale du signal enregistré, on peut dire qu’elle correspond à un l’enregistrement d’un spectrogramme. L’étape de reconnaissance proprement dite consiste alors à analyser le signal inconnu sous la" ]
[ "c \n void acquire(struct lock *l)\n {\n for(;;)\n if(xchg(&l->locked, 1) == 0)\n return;\n }", "c \n void acquire(struct lock *l)\n {\n if(cas(&l->locked, 0, 1) == 0)\n return;\n }", "c \n void acquire(struct lock *l)\n {\n for(;;)\n if(cas(&l->locked, 1, 0) == 1)\n return;\n }", "c \n void acquire(struct lock *l)\n {\n if(l->locked == 0) \n return;\n }" ]
['c \n void acquire(struct lock *l)\n {\n for(;;)\n if(xchg(&l->locked, 1) == 0)\n return;\n }']
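The accepted answer spins on an atomic exchange: it keeps swapping 1 into l->locked and returns only once the old value was 0, i.e., once this caller is the one that flipped the lock. As a hedged illustration (not the exam's exact code), the same idea compiles with C11 atomics standing in for the x86 xchg instruction:

```c
#include <stdatomic.h>   /* C11 atomics stand in for the x86 xchg instruction */

struct lock { atomic_int locked; };   /* 0 = UNLOCKED, 1 = LOCKED */

void acquire(struct lock *l)
{
    /* atomic_exchange returns the previous value; we own the lock only if
     * the value we swapped out was 0 (UNLOCKED). Otherwise keep spinning. */
    for (;;)
        if (atomic_exchange(&l->locked, 1) == 0)
            return;
}

void release(struct lock *l)
{
    atomic_store(&l->locked, 0);
}
```

The retry loop is the point: the single-shot cas variant among the choices can return without ever having obtained the lock, and the plain load in the last option is not atomic at all.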
11
In which of the following cases does JOS acquire the big kernel lock?
[ "@1ntext Persa Now, suppose Persa is a packet switch that forwards Alice’s traffic to Bob. Suppose Persa *modifies* the {plaintext, ciphertext} pair sent by Alice. But because Persa does not know *Alice’s private key*, she cannot produce a “consistent” pair: when Bob decrypts [click] Persa’s ciphertext he will not get Alice’s plaintext, and he will not get Persa’s plaintext; he will get something that doesn’t make any sense. So, Bob will know that it is not Alice who sent this {plaintext, ciphertext} pair. As before: There is something silly about this approach: Alice sends twice the amount of data to Bob (relative to what she wants to actually say to him). Because the ciphertext is as large as the plaintext. → Is there a better way to achieve authenticity and data integrity? That does not require to send that much extra data to Bob? Bob 75 hash function Alice-key- Alice-key+ Alice plaintext hash ciphertext plaintext plaintext encryption algorithm ciphertext decryption algorithm hash hash hash function hash plaintext Alice and Bob can combine encryption/decryption with a cryptographic hash function*: - Alice provides her plaintext as input [click] to a hash function and obtains a hash [click]. - She provides the hash as input to an encryption algorithm (together with her private key) and obtains a ciphertext [click]. - She sends to Bob both the plaintext and the ciphertext [click]. - Bob provides the ciphertext as input to a decryption algorithm (together with *Alice’s public key*), and obtains the hash [click]. - Then Bob provides the plaintext that Alice sent as input [click] to his hash function, and obtains the same hash [click]. Bob knows that it is Alice who sent the {plaintext, ciphertext} pair, because only someone who knows Alice’s private key can produce a pair where the plaintext and the ciphertext yield the same hash. We call this ciphertext a... Bob 58 hash function Alice-key- Alice-key+ Alice", "that it is not Alice who sent this {plaintext, ciphertext} pair. However: There is something silly about this approach: Alice sends twice the amount of data to Bob (relative to what she wants to actually say to him). Because the ciphertext is as large as the plaintext. → Is there a better way to achieve authenticity and data integrity? That does not require to send that much extra data to Bob? Bob 68 hash function hash function key key Alice plaintext hash << ciphertext hash hash plaintext plaintext Instead of encryption/decryption algorithms, Alice and Bob can use a *cryptographic hash function*. - Alice provides her plaintext as input [click] to her hash function (together with the shared secret key), and she obtains a *hash* [click] of her plaintext. By definition, a hash is smaller (typically significantly more) than the input to the hash function. So, Alice obtains a hash that is (typically significantly) smaller than her plaintext. - Alice sends *both* the plaintext and the hash to Bob [click]. - Bob provides the plaintext as input to his hash function (together with the shared secret key), and obtains a hash. Bob knows that it is Alice who sent the {plaintext, MAC} pair, because only someone who knows the shared secret key can produce a pair where the plaintext yields the MAC if hashed with this particular key. We call this hash... Bob 53 hash function hash function key key Alice plaintext MAC MAC plaintext MAC plaintext Message Authentication Code or MAC. 
Bob 54 hash function hash function key key Alice plaintext |Ç#@ M@C Persa plaintext pla1nt3xt MAC Now, suppose Persa is a packet switch that forwards Alice’s traffic to Bob. Suppose Persa *modifies* the {plaintext, MAC} pair sent by Alice. But because Persa does not know the shared secret key, she cannot produce a “consistent” pair: when Bob hashes [click] Persa’s plaintext he will not get Alice’s MAC, and he will not get Persa’s MAC; he will get something that doesn’t make any sense. So, Bob will know that it", "the film faced delays due to rewrites and the COVID-19 pandemic. Spielberg was initially set to direct but stepped down in 2020, with Mangold taking over. Filming began in June 2021 in various locations including the United Kingdom, Italy, and Morocco, wrapping in February 2022. Franchise composer John Williams returned to score the film, earning nominations for Best Original Score at the 96th Academy Awards and Best Score Soundtrack for Visual Media at the 66th Annual Grammy Awards. Williams won the Grammy Award for Best Instrumental Composition for \"Helena's Theme\". Indiana Jones and the Dial of Destiny premiered out of competition at the 76th Cannes Film Festival on May 18, 2023, and was theatrically released in the United States on June 30, by Walt Disney Studios Motion Pictures. The film received mixed reviews and grossed $384 million worldwide, becoming a box-office bomb due to a lack of wide audience appeal and being one of the most expensive films ever made, with an estimated loss of $143 million for Disney. Plot Toward the end of World War II, Nazis capture Indiana Jones and Oxford archaeologist Basil Shaw as they attempt to retrieve the Lance of Longinus from a castle in the French Alps. Astrophysicist Jürgen Voller informs his superiors the Lance is fake, but he has found half of Archimedes' Dial, an Antikythera mechanism built by the ancient Syracusan mathematician Archimedes which reveals time fissures, thereby allowing for possible time travel. Jones escapes onto a Berlin-bound train filled with looted antiquities and frees Basil. He obtains the Dial piece, and the two escape just before Allied forces derail the train. In 1969, Jones, who is retiring from Hunter College in New York City, has been separated from his wife Marion Ravenwood since their son Mutt's death in the Vietnam War. Jones' goddaughter, archaeologist Helena Shaw, unexpectedly visits and wants to research the Dial. Jones warns that her late father, Basil, became obsessed with studying the Dial before relinquishing it to Jones to destroy, which he never did. 
As Jones and Helena retrieve the Dial half from the college archives, Voller's accomplices attack them", "\\mathcal {C}}_{YY}^{\\pi }=\\mathbb {E} [\\varphi (Y)\\otimes \\varphi (Y)].} In practical implementations, the kernel chain rule takes the following form C ^ X Y π = C ^ X ∣ Y C ^ Y Y π = Υ ( G + λ I ) − 1 G ~ diag ⁡ ( α ) Φ ~ T {\\displaystyle {\\widehat {\\mathcal {C}}}_{XY}^{\\pi }={\\widehat {\\mathcal {C}}}_{X\\mid Y}{\\widehat {\\mathcal {C}}}_{YY}^{\\pi }={\\boldsymbol {\\Upsilon }}(\\mathbf {G} +\\lambda \\mathbf {I} )^{-1}{\\widetilde {\\mathbf {G} }}\\operatorname {diag} ({\\boldsymbol {\\alpha }}){\\boldsymbol {\\widetilde {\\Phi }}}^{T}} Kernel Bayes' rule In probability theory, a posterior distribution can be expressed in terms of a prior distribution and a likelihood function as Q ( Y ∣ x ) = P ( x ∣ Y ) π ( Y ) Q ( x ) {\\displaystyle Q(Y\\mid x)={\\frac {P(x\\mid Y)\\pi (Y)}{Q(x)}}} where Q ( x ) = ∫ Ω P ( x ∣ y ) d π ( y ) {\\displaystyle Q(x)=\\int _{\\Omega }P(x\\mid y)\\,\\mathrm {d} \\pi (y)} The analog of this rule in the kernel embedding framework expresses the kernel embedding of the conditional distribution in terms of conditional embedding operators which are modified by the prior distribution μ Y ∣ x π = C Y ∣ X π φ ( x ) = C Y X π ( C X X π ) − 1 φ ( x ) {\\displaystyle \\mu _{Y\\mid x}^{\\pi }={\\mathcal {C}}_{Y\\mid X", "the boot process. The secure boot process begins with secure flash, which ensures that unauthorized changes cannot be made to the firmware. Authorized releases of Junos OS carry a digital signature produced by either Juniper Networks directly or one of its authorized partners." ]
[ "Processor traps in user mode", "Processor traps in kernel mode", "Switching from kernel mode to user mode", "Initialization of application processor" ]
['Processor traps in user mode', 'Initialization of application processor']
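As a hedged sketch of what the accepted answer describes (this is not JOS source; the names kernel_lock, trap_entry, and ap_init are illustrative), a big-kernel-lock design takes one global spinlock on every entry into the kernel from user mode and in the startup path of each application processor:

```c
/* Illustrative "big kernel lock" discipline; not actual JOS code. */
struct spinlock { volatile int locked; };
struct trapframe { unsigned int tf_cs; /* code segment saved at trap time */ };

static struct spinlock kernel_lock;

static void spin_acquire(struct spinlock *lk)
{
    while (__sync_lock_test_and_set(&lk->locked, 1))
        ;   /* spin until the previous value was 0 */
}

void trap_entry(struct trapframe *tf)
{
    if ((tf->tf_cs & 3) == 3)           /* low two bits = privilege level 3: user */
        spin_acquire(&kernel_lock);     /* the trap came from user mode           */
    /* ... dispatch to the actual trap handler ... */
}

void ap_init(void)
{
    spin_acquire(&kernel_lock);         /* each application processor serializes
                                           its entry into shared kernel state    */
    /* ... per-CPU setup, then enter the scheduler ... */
}
```

Traps taken while already in kernel mode and the switch back to user mode do not acquire the lock in this sketch, which matches the two options marked correct.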
15
In an x86 multiprocessor running JOS, at most how many Bootstrap Processors (BSP) can there be? And at most how many Application Processors (AP)?
[ "use since 1981 when Hunter & Ready, the developers of the Versatile Real-Time Executive (VRTX), first coined the term to describe the hardware-dependent software needed to run VRTX on a specific hardware platform. Since the 1980s, it has been in wide use throughout the industry. Virtually all RTOS providers now use the term BSP. In modern systems, the term has been extended to refer to packages that only deal with one processor, not the whole motherboard. Windows CE and Android also use a BSP. Example The Wind River Systems board support package for the ARM Integrator 920T single-board computer contains, among other things, these elements: A config.h file, which defines constants such as ROM_SIZE and RAM_HIGH_ADRS. A Makefile, which defines binary versions of VxWorks ROM images for programming into flash memory. A boot ROM file, which defines the boot line parameters for the board. A target.ref file, which describes board-specific information such as switch and jumper settings, interrupt levels, and offset bias. A VxWorks image. Various C files, including: flashMem.c—the device driver for the board's flash memory pciIomapShow.c—mapping file for the PCI bus primeCellSio.c—TTY driver sysLib.c—system-dependent routines specific to this board romInit.s—ROM initialization module for the board; contains entry code for images that start running from ROM Additionally the BSP is supposed to perform the following operations: Initialize the processor Initialize the board Initialize the RAM Configure the segments Load and run OS from flash See also BIOS UEFI", "In embedded systems, a board support package (BSP) is the layer of software containing hardware-specific boot loaders, device drivers, in sometimes operating system kernels, and other routines that allow a given embedded operating system, for example a real-time operating system (RTOS), to function in a given hardware environment (a motherboard), integrated with the embedded operating system. The board support package is usually provided by the SoC manufacturer (such as Qualcomm), and it can be modified by the OEM. Software Third-party hardware developers who wish to support a given embedded operating system must create a BSP that allows that embedded operating system to run on their platform. In most cases, the embedded operating system image and software license, the BSP containing it, and the hardware are bundled together by the hardware vendor. BSPs are typically customizable, allowing the user to specify which drivers and routines should be included in the build based on their selection of hardware and software options. For instance, a particular single-board computer might be paired with several peripheral chips; in that case the BSP might include drivers for peripheral chips supported; when building the BSP image the user would specify which peripheral drivers to include based on their choice of hardware. Some suppliers also provide a root file system, a toolchain for building programs to run on the embedded system, and utilities to configure the device (while running) along with the BSP. Many embedded operating system providers provide template BSP's, developer assistance, and test suites to aid BSP developers to set up an embedded operating system on a new hardware platform. History The term BSP has been in use since 1981 when Hunter & Ready, the developers of the Versatile Real-Time Executive (VRTX), first coined the term to describe the hardware-dependent software needed to run VRTX on a specific hardware platform. 
Since the 1980s, it has been in wide use throughout the industry. Virtually all RTOS providers now use the term BSP. In modern systems, the term has been extended to refer to packages that only deal with one processor, not the whole motherboard. Windows CE and Android also use a BSP. Example The Wind River Systems board support package for the ARM", "processors that have limited high-level language options such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these computers of the 1970s and 1980s are often written in the context of demoscene or retrogaming subcultures. Code that must interact directly with the hardware, for example in device drivers and interrupt handlers. In an embedded processor or DSP, high-repetition interrupts require the shortest number of cycles per interrupt, such as an interrupt that occurs 1000 or 10000 times a second. Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition. Stand-alone executables that are required to execute without recourse to the run-time components or libraries associated with a high-level language, such as the firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, and security systems. Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264). Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor. Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. Choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details. Cryptographic algorithms that must", "even included more than one processor core to work in parallel. Other DSPs from 1995 are the TI TMS320C541 or the TMS 320C80. The fourth generation is best characterized by the changes in the instruction set and the instruction encoding/decoding. SIMD extensions were added, and VLIW and the superscalar architecture appeared. As always, the clock-speeds have increased; a 3 ns MAC now became possible. Modern DSPs Modern signal processors yield greater performance; this is due in part to both technological and architectural advancements like lower design rules, fast-access two-level cache, (E)DMA circuitry, and a wider bus system. Not all DSPs provide the same speed and many kinds of signal processors exist, each one of them being better suited for a specific task, ranging in price from about US$1.50 to US$300. Texas Instruments produces the C6000 series DSPs, which have clock speeds of 1.2 GHz and implement separate instruction and data caches. 
They also have an 8 MiB 2nd level cache and 64 EDMA channels. The top models are capable of as many as 8000 MIPS (millions of instructions per second), use VLIW (very long instruction word), perform eight operations per clock-cycle and are compatible with a broad range of external peripherals and various buses (PCI/serial/etc). TMS320C6474 chips each have three such DSPs, and the newest generation C6000 chips support floating point as well as fixed point processing. Freescale produces a multi-core DSP family, the MSC81xx. The MSC81xx is based on StarCore Architecture processors and the latest MSC8144 DSP combines four programmable SC3400 StarCore DSP cores. Each SC3400 StarCore DSP core has a clock speed of 1 GHz. XMOS produces a multi-core multi-threaded line of processor well suited to DSP operations, They come in various speeds ranging from 400 to 1600 MIPS. The processors have a multi-threaded architecture that allows up to 8 real-time threads per core, meaning that a 4 core device would support up to 32", "Hoare, inventor of Rust. Ken Thompson, inventor of B and Go. Kenneth E. Iverson, developer of APL, co-developer of J with Roger Hui. Konrad Zuse, designed the first high-level programming language, Plankalkül (which influenced ALGOL 58). Kristen Nygaard, pioneered object-oriented programming, co-invented Simula. Larry Wall, creator of the Perl programming language (see Perl and Raku). Martin Odersky, creator of Scala, and previously a contributor to the design of Java. Martin Richards developed the BCPL programming language, forerunner of the B and C languages. Nathaniel Rochester, inventor of first assembler (IBM 701). Niklaus Wirth, inventor of Pascal, Modula and Oberon. Ole-Johan Dahl, pioneered object-oriented programming, co-invented Simula. Rasmus Lerdorf, creator of PHP. Rich Hickey, creator of Clojure. Robert Gentleman, co-creator of R. Robert Griesemer, co-creator of Go. Robin Milner, inventor of ML, and sharing credit for Hindley–Milner polymorphic type inference. Rob Pike, co-creator of Go, Inferno (operating system) and Plan 9 (operating system) Operating System co-author. Ross Ihaka, co-creator of R. Stanley Cohen, inventor of Speakeasy, which was created with an OOPS, object-oriented programming system, the first instance, in 1964. Stephen Wolfram, creator of Mathematica. Walter Bright, creator of D. Yukihiro Matsumoto, creator of Ruby. See also References Further reading Rosen, Saul, (editor), Programming Systems and Languages, McGraw-Hill, 1967. Sammet, Jean E., Programming Languages: History and Fundamentals, Prentice-Hall, 1969. Sammet, Jean E. (July 1972). \"Programming Languages: History and Future\". Communications of the ACM. 15 (7): 601–610. doi:10.1145/361454.361485. S2CID 2003242. Richard L. Wexelblat (ed.): History of Programming Languages, Academic Press 1981. Thomas" ]
[ "BSP: 0, AP: 4", "BSP: 1, AP: 4", "BSP: 2, AP: 4", "BSP: 0, AP: infinite", "BSP: 1, AP: infinite", "BSP: 2, AP: infinite" ]
['BSP: 1, AP: infinite']
20
Assume a user program executes the following tasks. Select all options that will use a system call.
[ "The operating system takes control L03.3: System calls CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU Question How can a process request (from the OS) for operations that are only possible in the kernel mode (example: IO requests)? 25 26 Requesting OS services (user mode → kernel mode) • Processes can request OS services through the system call API (example: fork/exec/wait) • System calls transfer execution to the OS, meanwhile the execution of the process is suspended OS Kernel mode User mode Process Process System call issued Return from system call Time 27 System calls System calls exposes key functionalities: • Creating and destroying processes • Accessing the file system • Communicating with other processes • Allocating memory Most OSes provide hundreds of system calls • Linux currently has more than 300+ 28 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time (Ring 3 → Ring 0) • Now, privileged operations can be performed Trap is a signal raised by a process instructing the OS to perform some functionality immediately 29 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time: (Ring 3 → Ring 0) • Now, privileged operations can be performed • When finished, the OS calls a special return-from-trap instruction • Returns to the calling process and lowers the privilege level at the same time: (Ring 0 → Ring 3) • Now, privileged operations cannot be performed 30 Preparing for a system call: save a process’ states OS Kernel mode User mode Process Save the states of the process On the x86, the trap will push the program counter, flags, and general-purpose registers onto a per-process stack trap Time 31 Completing a system call: restore a process’ states OS Kernel mode User mode return-from-trap Restore the states of the process Process Time", ", functions (routines, subroutines, procedures, methods, etc.) are used to encapsulate code and make it reusable. Calling a function involves these steps: 1. Place arguments where the called function can access them. 2. Jump to the function. 3. Acquire storage resources the function needs. 4. Perform the desired task of the function. 5. Communicate the result value back to the calling program. 6. Release any local storage resources. 7. Return control to the calling program. 2.3.1 Jump to the Function/Retun control to the calling program The too simple not working approach A simple (not working) approach for creating functions would be to do this: 19 CHAPTER 2. PART I(B) - ISA, FUNCTIONS, AND STACK - W 1.2 With this approach the function doesn’t know where to return to after being called (back2 or back) For the next part, remember, the Program Counter is distinct from general-purpose registers. It is dedicated to managing the flow of instruction execution, while general registers are used for data manipulation. The Good Approach The right approach involves using the Jump and Link instruction jal, here loading PC + 4 (remem- ber 4 bytes per Instruction) into x1 as a way to come back from the function. 1 main: 2. 3 jal x1, sqrt 4. 5. 6 jal x1, sqrt 1 sqrt: 2. 3. 4 jr x1 Both times x1 was used to store the return adress, and there is a reason for that (Register Conven- tions Sections). 
2.3.2 Jump Instructions There are only two core real jump instructions in RISCV, jal (jump and link) and jalr (jump and link register), the rest are pseudo instructions using them. 20 Notes by Ali EL AZDI 2.3.3 Register Conventions Register conventions are rules that dictate how registers are used in a program, here are the ones we’ve seen for now 2.3.4 Back to the good (not so good) approach There’s still a problem with the previous approach, say for example you want to call a function from another function. Here the allocated space for the return address is overwritten by the second function call, and", "program. For that, the OS needs to create a new process and create a new address space to load the program Let’s divide and conquer: • fork() creates a new process (replica) with a copy of its own address space • exec() replaces the old program image with a new program image fork() exec() exit() wait() Why do we need fork() and exec()? 38 Multiple programs can run simultaneously Better utilization of hardware resources Users can perform various operations between fork() and exec() calls to enable various use cases: • To redirect standard input/output: • fork, close/open file descriptors, exec • To switch users: • fork, setuid, exec • To start a process with a different current directory: • fork, chdir, exec fork() exec() exit() wait() Why do we need fork() and exec()? open/close are special file-system calls Set user ID (change user who can be the owner of the process) Go to a specified directory 39 wait(): Waiting for a child process • Child processes are tied to their parent • There exists a hierarchy among processes on forking A parent process uses wait() to suspend its execution until one of its children terminates. The parent process then gets the exit status of the terminated child pid_t wait (int *status); • If no child is running, then the wait() call has no effect at all • Else, wait() suspends the caller until one of its children terminates • Returns the PID of the terminated child process fork() exec() exit() wait() 40 exit(): Terminating a process When a process terminates, it executes exit(), either directly on its own, or indirectly via library code void exit (int status); • The call has no return value, as the process terminates after calling the function • The exit() call resumes the execution of a waiting parent process fork() exec() exit() wait() Waiting for children to die... 41 • Scenarios under which a process terminates • By calling exit() itself • OS terminat", "• The call has no return value, as the process terminates after calling the function • The exit() call resumes the execution of a waiting parent process fork() exec() exit() wait() Waiting for children to die... 41 • Scenarios under which a process terminates • By calling exit() itself • OS terminates a misbehaving process • Terminated process exists as a zombie • When a parent process calls wait(), the zombie child is cleaned up or “reaped” • If a parent terminates before child, the child becomes an orphan • init (pid: 1) process adopts orphans and reaps them fork() exec() exit() wait() P1 C1 wait() P1 reaps C1 Waiting for children to die... 
42 • Scenarios under which a process terminates • By calling exit() itself • OS terminates a misbehaving process • Terminated process exists as a zombie • When a parent process calls wait(), the zombie child is cleaned up or “reaped” • If a parent terminates before child, the child becomes an orphan • init (pid: 1) process adopts orphans and reaps them fork() exec() exit() wait() P1 init C1 wait() P1 reaps C1 P1 C1 init eventually reaps C1 43 Process state transition (full lifecycle) Running Ready Blocked Descheduled Scheduled I/O done A process can be in one of several states during its life cycle: • Running • Ready • Blocked • Zombie I/O start fork() exit() A tree of processes 44 • Each process has a parent process • init is the first process (pid: 1) without any parent process • A process can have many child processes • Each process again can have child processes L02.3: fork() illustrated 3 examples, step-by-step CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU 46 1 #include <stdio.h> 2 #include <stdlib.h> 3 #include <unistd.h> 4 5 int main(int argc, char *arg", "6 Task switching mechanism: context switch • The OS can be in the kernel mode, it cannot return back to the same process • Process is finished or must be terminated (e.g., invalid operations) • Process did a system call and it is waiting for it to complete (IO operation) • The OS does not want to run the same process • The process has run for too long • There are other processes present and they should be scheduled The OS performs a context switch to stop running one process and start running another, i.e., switch from one process to another 7 Task switching mechanism: context switch 8 Reminder: Process state transitions Running Ready Blocked Descheduled Scheduled I/O done I/O start 9 Context switch A context switch is a mechanism that allows the OS to store the current process state and switch to some other, previously stored context. • The context of the process is represented in the process control block (PCB) • The OS maintains the PCB for each process • The process control block (PCB) includes hardware registers • All registers available to user code (e.g. x86 general registers) • All process-specific registers (e.g. on x86 cr3 -- the base of the page table) • Stored in the PCB when the process is not currently running 10 Context switch procedure The OS does the following operations during the context switch: 1. Saves the running process’ execution state in the PCB 2. Selects the next thread 3. Restores the execution state of the next process 4. Passes the control using return from trap to resume next process Process 0 OS (CPU0) Process 1 Interrupt / system call Save state into PCB0 Reload state from PCB1 Save state into PCB1 Reload state from PCB0 Interrupt / system call executing executing de-sched. executing Context switch Context switch PCB: Process control block Note: the de-scheduled process is in either Ready or Blocked state de-sched. de-sched. 11 Preemption for process scheduling* • A process may never give up control, exits, or performs IO • This leads to the process running forever and the OS cannot gain control • OS sets a timer before scheduling a process • Hardware generates an" ]
[ "Read the user's input \"Hello world\" from the keyboard.", "Write \"Hello world\" to a file.", "Encrypt \"Hello world\" by AES.", "Send \"Hello world\" to another machine via Network Interface Card." ]
['Read the user\'s input "Hello world" from the keyboard.', 'Write "Hello world" to a file.', 'Send "Hello world" to another machine via Network Interface Card.']
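The accepted answer turns on which tasks need the kernel: reading the keyboard, writing a file, and sending over the network all cross the user/kernel boundary, while encrypting a buffer is pure user-space computation. A minimal POSIX sketch (illustrative only; the xor loop is a placeholder for AES, which likewise needs no system call):

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* 1. Reading the user's input: read() traps into the kernel. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n <= 0)
        return 1;

    /* 2. Encrypting the buffer: plain computation, no system call needed.
     *    (A toy xor stands in for AES here.) */
    for (ssize_t i = 0; i < n; i++)
        buf[i] ^= 0x5a;

    /* 3. Writing to a file: open(), write(), and close() are system calls. */
    int fd = open("out.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;
    write(fd, buf, (size_t)n);
    close(fd);

    /* 4. Sending to another machine would likewise require system calls:
     *    socket(), connect(), send(), ... */
    return 0;
}
```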
22
What is the content of the inode?
[ "ed to the file by the file system Note Inodes are unique for a file system but not globally Recycled after deletion An inode contains metadata of a file Permissions length access time Location of data block and indirection block Each file ha exactly one associated inode OS view Inode persistent ID Storage space is split into inode table and data storage Files are statically allocated Require inode number to access file content Inode table Metadata location size location size location size location size location size data F data F data F Storage space is split into inode table and data storage Files are statically allocated Require inode number to access file content Idea Use a dedicated place at the beginning of the storage medium mostly initial block Inode table Metadata location size location size location size location size location size data F data F data F Inode and device number persistent ID Path human readable File descriptor process view The file abstraction perspective Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets Each file ha a human readable format file name Humans are better at remembering name than number Files are organized into hierarchy of directory pathame Humans like to organize thing logically A filename is unique locally to a directory a full pathname is globally unique Modern file system mostly use untyped file array of byte File is a sequence of byte OS file system doe neither understand nor care about content User view file name A special file directory store mapping between file name and inodes Extend to hierarchy Mark if a file map to a regular file Access tmp test txt in step tmp test txt content Path inode Metadata location size location size location size location size location size tmp etc test txt Hello world Inode doe NOT contain the file name Each directory is a file stored like regular file Flag in the Inode separate directory from regular file Flag restricts API to process e g cannot write to a directory Contains array of filename Indode Multiple file name can map to the same inode inode ha a reference count Called shortcut in Windows or hard link in UNIX Linux Inodes and Directories A special file that store the mapping between human friendly name of", "de Metadata location size location size location size location size location size tmp etc test txt Hello world Inode doe NOT contain the file name Each directory is a file stored like regular file Flag in the Inode separate directory from regular file Flag restricts API to process e g cannot write to a directory Contains array of filename Indode Multiple file name can map to the same inode inode ha a reference count Called shortcut in Windows or hard link in UNIX Linux Inodes and Directories A special file that store the mapping between human friendly name of file and their inode number Contains subdirectory List of directory file indicates the root typically inode The path abstraction Directory bin l home sanidhya linuxbrew map to the current directory map to the parent directory More about directory Nine character after d or are permission bit rwx for owner group everyone Owner can read and write group and others can just read x set on a file mean that the file is executable x set on a directory user group others are allowed to cd to that directory Permission bit Inode and device number persistent ID Path name human readable File descriptor process view The file abstraction perspective Processor Memory Storage IO connection HW Operating system 
Process Threads Address space Files Sockets The combination of file name and inode device IDs are sufficient to implement persistent storage Drawback constant lookup from file name to inode device IDs are costly Idea do expensive tree traversal once store final inode device number in a per process table Also keep additional information such a file offset Per process table of open file Use linear number fd reuse when freed Process view file descriptor int fd open out txt return read fd buf Example Operations on a file fd table offset inode X device Y location A size B Each process ha it fd table are mapped to STDIN STDOUT and STDERR fd is and inode is X read update the offset to from int fd open mydir out txt return read fd buf int fd open out txt return Example Operations on a file fd table offset inode X device Y location A size B Each process ha it fd table are mapped to STDIN STDOUT and", "into hierarchies of directories: pathame • Humans like to organize things logically • A filename is unique locally to a directory; a full pathname is globally unique • Modern file systems mostly use untyped files: array of bytes • File is a sequence of bytes • OS/file system does neither understand nor care about contents 23 User view: file name • A special file (directory) stores mapping between file names and inodes • Extend to hierarchy: Mark if a file maps to a regular file • Access ‘/tmp/test.txt’ in 3 steps: ‘tmp’, ‘test.txt’, contents 24 Path → inode Metadata location size=18 location size location size=12 location size=12 location size 0 1 2 3 4 ‘tmp’: 2, ‘etc’: 15,... ‘test.txt’: 3 ‘Hello world!’ • Inode does NOT contain the file name • Each directory is a file (stored like regular files) • Flag in the Inode separates directories from regular files • Flag restricts API to processes (e.g., cannot write to a directory) • Contains array of { filename, Indode} • Multiple file names can map to the same inode • → inode has a reference count • Called shortcut in Windows, or hard link in UNIX/Linux 25 Inodes and Directories • A special file that stores the mapping between human-friendly names of files and their inode numbers • Contains subdirectories: • List of directories, files • / indicates the root (typically inode:1) 26 The path abstraction: Directory / bin ls home sanidhya linuxbrew • “.” maps to the current directory • “.” maps to the parent directory 27 More about directories • Nine characters (after ‘d’ or ‘.’) are permission bits • rwx for owner, group, everyone • Owner can read and write; group and others can just read • x set on a file means that the file is executable • x set on a directory: user/group/others are allowed to cd to that directory 28 Permission bits 1. Inode and device number (pers", "inode content are not on consecutive locations on disk. 54 Batching operations A process must block on a read operation. But what about a write? Idea: Delay all write operations ●perform them asynchronously (typical: wait at most 30 seconds) ●Reorder operations to maximize throughput (insert within the elevator algorithm) Consequence: content will be lost if the OS crashes 55 Delaying operations ●Multi-level indexing was introduced with early UNIX systems ●Early 1990s : introduction of log-structured filesystems • Insight: because of caching and increased memory sizes, most I/O is actually writes, not reads. • Idea: all writes should be to a log... then reconstruct file from the log • Today, modern file systems leverage ideas from log-structured file systems for meta-data operations • e.g. 
ext4 on Linux 56 Modern file systems Operating Systems wear multiple hats ●General-purpose abstractions and implementations ●Good performance for a wide range of operations. 57 Alternative view point : bypass the kernel Alternative design ●Expose resource directly to applications ●“Raw IO” -- direct access to the disk ○No file system, no buffer cache, no indirection Approach favored by high-end transactional databases ●Caching, logging, buffering, indexing, etc is all done by the database application, not the operating system 58 Summary • Overlap IO and computation as much as possible! • Use interrupts • Use DMA • Driver classes provide common interface • Storage: read/write/seek of blocks • File system design is informed by IO performance • Eliminate IO, batch IO, delay IO • Carefully schedule IOs on slow devices (minimize seek time on rotating HDD)", "Path (human readable) 3. File descriptor (process view) 18 The file abstraction: 3 perspectives Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets • Low-level unique ID assigned to the file by the file system • Note: Inodes are unique for a file system but not globally • Recycled after deletion • An inode contains metadata of a file • Permissions, length, access times • Location of data blocks and indirection blocks • Each file has exactly one associated inode 19 OS view: Inode (persistent ID) • Storage space is split into inode table and data storage • Files are statically allocated • Require inode number to access file content 20 Inode table Metadata location size=18 location size location size=12 location size=12 location size 0 1 2 3 4 data F1 data F2 data F3 • Storage space is split into inode table and data storage • Files are statically allocated • Require inode number to access file content Idea: Use a dedicated place at the beginning of the storage media, mostly initial block 21 Inode table Metadata location size=18 location size location size=12 location size=12 location size 0 1 2 3 4 data F1 data F2 data F3 1. Inode and device number (persistent ID) 2. Path (human readable) 3. File descriptor (process view) 22 The file abstraction: 3 perspectives Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets • Each file has a human readable format: file name • Humans are better at remembering names than numbers • Files are organized into hierarchies of directories: pathame • Humans like to organize things logically • A filename is unique locally to a directory; a full pathname is globally unique • Modern file systems mostly use untyped files: array of bytes • File is a sequence of bytes • OS/file system does neither understand nor care about contents 23 User view: file name • A special file (directory) stores mapping between file names and inodes • Extend to hierarchy: Mark if a file maps to a regular file • Access ‘" ]
[ "Filename", "File mode", "Hard links counter", "String with the name of the owner", "File size", "Capacity of the whole file system", "Index structure for data blocks" ]
['File mode', 'Hard links counter', 'File size', 'Index structure for data blocks']
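The context passages spell out what an inode stores: permissions/mode, link (reference) count, length, access times, and the location of data and indirection blocks, and explicitly not the filename, which lives in directory entries. A hedged on-disk layout sketch, loosely modeled on classic UNIX inodes (field names and sizes are illustrative):

```c
#include <stdint.h>

#define NDIRECT 12   /* direct block pointers, plus one indirect block */

/* Illustrative on-disk inode, covering the fields named in the passages.
 * Note what is absent: the filename. Directories map names to inode
 * numbers, and several names (hard links) may point at the same inode. */
struct dinode {
    uint16_t mode;                   /* file type + permission bits (rwx...) */
    uint16_t nlink;                  /* hard-link (reference) count          */
    uint32_t size;                   /* file length in bytes                 */
    uint32_t atime, mtime, ctime;    /* access / modification / change times */
    uint32_t direct[NDIRECT];        /* index structure: direct data blocks  */
    uint32_t indirect;               /* block holding further block numbers  */
};
```

Nothing here records the owner's name as a string (real inodes store a numeric uid) or the capacity of the whole file system (that belongs to the superblock), which is why those choices are excluded from the answer.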
23
In x86, what are the possible ways to transfer arguments when invoking a system call? For example, in the following code, string and len are sys_cputs’s arguments.
[ "a letter through the postal system, you have to follow certain rules. - You need to put your letter in an envelope and write a correct address on a particular part of the envelope. - You need to drop your letter in a mailbox. These rules are the “interface” between you and the postal system -- your only way of using the postal system successfully. Similarly, when a process wants to send a message over the Internet, it has to use certain syscalls in a certain way. So, these syscalls are the “interface” between the process and the Internet, this is why they are called an “Application Programming Interface”. You will learn how to write code that uses this API in the second half of your project with Jean-Cédric. → What happens when a process does a system call? Alice’s computer IO controller DMA controller NIC NIC controller memory data data data CPU user mode kernel mode Process running Makes syscall More network functions OS interacts with NIC Syscall handler runs, calls network functions NIC does its thing Embeds data into physical signal NIC interacts with physical communication medium data Consider a general-purpose computer; its CPU, main memory, and Network Interface Card (NIC). The NIC has a NIC controller (same way a disk has a disk controller). Somewhere in there, there is also the I/O controller (through which the CPU communicates with the peripheral devices), and the DMA controller (which manages data transfer between main memory and peripheral devices). Suppose there is a process running (the CPU is in user mode, executing the instructions of this process). At some point, the process creates some data, and it 48 wants to send it somewhere over the Internet. To do that, it makes a network syscall (the instructions of the process include a trap instruction; when the CPU executes that instruction, it switches to kernel mode). As a result, the syscall handler for the given syscall starts running. This invokes network-related functions, which typically add metadata (the yellow and green chunks) to the data created by the process. When the OS finishes preparing the data, it interacts with the NIC and, as a result, the data is copied from main memory into the NIC’", "The operating system takes control L03.3: System calls CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU Question How can a process request (from the OS) for operations that are only possible in the kernel mode (example: IO requests)? 
25 26 Requesting OS services (user mode → kernel mode) • Processes can request OS services through the system call API (example: fork/exec/wait) • System calls transfer execution to the OS, meanwhile the execution of the process is suspended OS Kernel mode User mode Process Process System call issued Return from system call Time 27 System calls System calls exposes key functionalities: • Creating and destroying processes • Accessing the file system • Communicating with other processes • Allocating memory Most OSes provide hundreds of system calls • Linux currently has more than 300+ 28 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time (Ring 3 → Ring 0) • Now, privileged operations can be performed Trap is a signal raised by a process instructing the OS to perform some functionality immediately 29 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time: (Ring 3 → Ring 0) • Now, privileged operations can be performed • When finished, the OS calls a special return-from-trap instruction • Returns to the calling process and lowers the privilege level at the same time: (Ring 0 → Ring 3) • Now, privileged operations cannot be performed 30 Preparing for a system call: save a process’ states OS Kernel mode User mode Process Save the states of the process On the x86, the trap will push the program counter, flags, and general-purpose registers onto a per-process stack trap Time 31 Completing a system call: restore a process’ states OS Kernel mode User mode return-from-trap Restore the states of the process Process Time", "strings (no escaping), or to disable or enable variable interpolation, but has other uses, such as distinguishing character sets. Most often this is done by changing the quoting character or adding a prefix or suffix. This is comparable to prefixes and suffixes to integer literals, such as to indicate hexadecimal numbers or long integers. One of the oldest examples is in shell scripts, where single quotes indicate a raw string or \"literal string\", while double quotes have escape sequences and variable interpolation. For example, in Python, raw strings are preceded by an r or R – compare 'C:\\\\Windows' with r'C:\\Windows' (though, a Python raw string cannot end in an odd number of backslashes). Python 2 also distinguishes two types of strings: 8-bit ASCII (\"bytes\") strings (the default), explicitly indicated with a b or B prefix, and Unicode strings, indicated with a u or U prefix. while in Python 3 strings are Unicode by default and bytes are a separate bytes type that when initialized with quotes must be prefixed with a b. C#'s notation for raw strings is called @-quoting. While this disables escaping, it allows double-up quotes, which allow one to represent quotes within the string: C++11 allows raw strings, unicode strings (UTF-8, UTF-16, and UTF-32), and wide character strings, determined by prefixes. It also adds literals for the existing C++ string, which is generally preferred to the existing C-style strings. In Tcl, brace-delimited strings are literal, while quote-delimited strings have escaping and interpolation. Perl has a wide variety of strings, which are more formally considered operators, and are known as quote and quote-like operators. 
These include both a usual syntax (fixed delimiters) and a generic syntax, which allows a choice of delimiters; these include: REXX uses suffix characters to specify characters or strings using their hexadecimal or binary code. E.g., all yield the space character, avoiding the function call X2C(20). Str", "strings (no escaping), or to disable or enable variable interpolation, but has other uses, such as distinguishing character sets. Most often this is done by changing the quoting character or adding a prefix or suffix. This is comparable to prefixes and suffixes to integer literals, such as to indicate hexadecimal numbers or long integers. One of the oldest examples is in shell scripts, where single quotes indicate a raw string or \"literal string\", while double quotes have escape sequences and variable interpolation. For example, in Python, raw strings are preceded by an r or R – compare 'C:\\\\Windows' with r'C:\\Windows' (though, a Python raw string cannot end in an odd number of backslashes). Python 2 also distinguishes two types of strings: 8-bit ASCII (\"bytes\") strings (the default), explicitly indicated with a b or B prefix, and Unicode strings, indicated with a u or U prefix. while in Python 3 strings are Unicode by default and bytes are a separate bytes type that when initialized with quotes must be prefixed with a b. C#'s notation for raw strings is called @-quoting. While this disables escaping, it allows double-up quotes, which allow one to represent quotes within the string: C++11 allows raw strings, unicode strings (UTF-8, UTF-16, and UTF-32), and wide character strings, determined by prefixes. It also adds literals for the existing C++ string, which is generally preferred to the existing C-style strings. In Tcl, brace-delimited strings are literal, while quote-delimited strings have escaping and interpolation. Perl has a wide variety of strings, which are more formally considered operators, and are known as quote and quote-like operators. These include both a usual syntax (fixed delimiters) and a generic syntax, which allows a choice of delimiters; these include: REXX uses suffix characters to specify characters or strings using their hexadecimal or binary code. E.g., all yield the space character, avoiding the function call X2C(20). Str", "as arguments a pointer to the message, its length, and (within a special data structure) the destination IP address and destination port number. In response, the transport layer starts putting together a packet: the message that the process is sending [click], the destination IP address [click] and port number [click] that the process passed as arguments through the sendto syscall, and the source IP address [click] and port number [click] that are associated with this socket. - To be precise, the transport layer creates only the transport-layer header (which contains the source and destination port numbers, not the source and destination IP addresses), but it still keeps track of the source and destination IP addresses, because it needs to provide them to the network layer, which will create the network-layer header. - If the process does not need the socket any more, it makes a “close” sys call [click], i.e., asks the transport layer to close it. In response, the transport layer deletes [click] the socket. 
EPFL CS202 Computer Systems Process R 5 int sockedId = socket (..., UDP); int ret = bind (socketId, [IP address: 5.5.5.5, port: 5000],...); for process R IP address: 5.5.5.5 port: 5000 UDP socket recvfrom (socketId, message, length,...); message Source port: 1000 Dest. port: 5000 Source IP address: 1.1.1.1 Dest. IP address: 5.5.5.5 close (socketId ); application layer transport layer Now consider the receiving end: A process R [click] wants to use UDP to receive a message from a remote process: - First, the process asks the transport layer to open a UDP socket [click]. In response, the transport layer creates [click] a UDP socket and associates it with this process. - Second, the process asks the transport layer to bind [click] the socket to a particular local IP address and port number. In response, the transport layer adds [click, click] this information to the socket. - At this point, the process is ready to receive a message through this socket. To do this, it makes a “recvfrom” syscall [click" ]
[ "Stack", "Registers", "Instructions" ]
['Stack', 'Registers']
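The accepted answers (stack and registers) can be made concrete with a user-mode syscall wrapper. The sketch below follows the JOS lab convention (trap vector 0x30, syscall number in %eax, up to five arguments in %edx, %ecx, %ebx, %edi, %esi); other x86 kernels choose different vectors and registers. For sys_cputs, the pointer string and the integer len each travel in a register, while the bytes they describe stay in user memory (typically on the user stack) and the kernel reads them through the passed pointer.

```c
#include <stdint.h>

/* Minimal sketch of a JOS-style user-mode syscall stub.  The "i"(0x30)
 * operand emits `int $0x30`; the register constraints place the syscall
 * number in %eax and the five arguments in %edx, %ecx, %ebx, %edi, %esi.
 * The return value comes back in %eax. */
static inline int32_t
syscall(int num, uint32_t a1, uint32_t a2, uint32_t a3, uint32_t a4, uint32_t a5)
{
    int32_t ret;
    asm volatile("int %1"
                 : "=a" (ret)
                 : "i" (0x30),          /* trap vector (JOS's T_SYSCALL)   */
                   "a" (num),           /* syscall number                  */
                   "d" (a1), "c" (a2), "b" (a3), "D" (a4), "S" (a5)
                 : "cc", "memory");
    return ret;
}
```

A call such as `syscall(SYS_cputs, (uint32_t)string, len, 0, 0, 0)` (with `SYS_cputs` being whatever number the kernel assigns) then issues the trap with both arguments in registers.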
26
What is the worst-case complexity of listing files in a directory? The file system implements directories as hash tables.

[ "In computer science, a hash list is typically a list of hashes of the data blocks in a file or set of files. Lists of hashes are used for many different purposes, such as fast table lookup (hash tables) and distributed databases (distributed hash tables). A hash list is an extension of the concept of hashing an item (for instance, a file). A hash list is a subtree of a Merkle tree. Root hash Often, an additional hash of the hash list itself (a top hash, also called root hash or master hash) is used. Before downloading a file on a p2p network, in most cases the top hash is acquired from a trusted source, for instance a friend or a web site that is known to have good recommendations of files to download. When the top hash is available, the hash list can be received from any non-trusted source, like any peer in the p2p network. Then the received hash list is checked against the trusted top hash, and if the hash list is damaged or fake, another hash list from another source will be tried until the program finds one that matches the top hash. In some systems (for example, BitTorrent), instead of a top hash the whole hash list is available on a web site in a small file. Such a \"torrent file\" contains a description, file names, a hash list and some additional data. Applications Hash lists can be used to protect any kind of data stored, handled and transferred in and between computers. An important use of hash lists is to make sure that data blocks received from other peers in a peer-to-peer network are received undamaged and unaltered, and to check that the other peers do not \"lie\" and send fake blocks. Usually a cryptographic hash function such as SHA-256 is used for the hashing. If the hash list only needs to protect against unintentional damage unsecured checksums such as CRCs can be used. Hash lists are better than a simple hash of the entire file since, in the case of a data block being damaged, this is noticed, and only the damaged block needs to be redownloaded. With", "of a fixed size. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. Hash functions are often used in combination with a hash table, a common data structure used in computer software for rapid data lookup. Hash functions accelerate table or database lookup by detecting duplicated records in a large file. hash table In computing, a hash table (hash map) is a data structure that implements an associative array abstract data type, a structure that can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found. heap A specialized tree-based data structure which is essentially an almost complete tree that satisfies the heap property: if P is a parent node of C, then the key (the value) of P is either greater than or equal to (in a max heap) or less than or equal to (in a min heap) the key of C. The node at the \"top\" of the heap (with no parents) is called the root node. heapsort A comparison-based sorting algorithm. Heapsort can be thought of as an improved selection sort: like that algorithm, it divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element and moving that to the sorted region. The improvement consists of the use of a heap data structure rather than a linear-time search to find the maximum. 
human-computer interaction (HCI) Researches the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. I identifier In computer languages, identifiers are tokens (also called symbols) which name language entities. Some of the kinds of entities an identifier might denote include variables, types", "syntax. A table is a set of key and data pairs, where the data is referenced by key; in other words, it is a hashed heterogeneous associative array. Tables are created using the {} constructor syntax. Tables are always passed by reference (see Call by sharing). A key (index) can be any value except nil and NaN, including functions. A table is often used as structure (or record) by using strings as keys. Because such use is very common, Lua features a special syntax for accessing such fields. By using a table to store related functions, it can act as a namespace. Tables are automatically assigned a numerical key, enabling them to be used as an array data type. The first automatic index is 1 rather than 0 as it is for many other programming languages (though an explicit index of 0 is allowed). A numeric key 1 is distinct from a string key \"1\". The length of a table t is defined to be any integer index n such that t[n] is not nil and t[n+1] is nil; moreover, if t[1] is nil, n can be zero. For a regular array, with non-nil values from 1 to a given n, its length is exactly that n, the index of its last value. If the array has \"holes\" (that is, nil values between other non-nil values), then #t can be any of the indices that directly precedes a nil value (that is, it may consider any such nil value as the end of the array). A table can be an array of objects. Using a hash map to emulate an array is normally slower than using an actual array; however, Lua tables are optimized for use as arrays to help avoid this issue. Metatables Extensible semantics is a key feature of Lua, and the metatable concept allows powerful customization of tables. The following example demonstrates an \"infinite\" table. For any n, fibs[n] will give the n-th Fibonacci number using dynamic programming and memoization. Object-oriented programming Although Lua does not have a built-in concept of classes, object-oriented programming can be", "File verification is the process of using an algorithm for verifying the integrity of a computer file, usually by checksum. This can be done by comparing two files bit-by-bit, but requires two copies of the same file, and may miss systematic corruptions which might occur to both files. A more popular approach is to generate a hash of the copied file and comparing that to the hash of the original file. Integrity verification File integrity can be compromised, usually referred to as the file becoming corrupted. A file can become corrupted by a variety of ways: faulty storage media, errors in transmission, write errors during copying or moving, software bugs, and so on. Hash-based verification ensures that a file has not been corrupted by comparing the file's hash value to a previously calculated value. If these values match, the file is presumed to be unmodified. 
Due to the nature of hash functions, hash collisions may result in false positives, but the likelihood of collisions is often negligible with random corruption. Authenticity verification It is often desirable to verify that a file hasn't been modified in transmission or storage by untrusted parties, for example, to include malicious code such as viruses or backdoors. To verify the authenticity, a classical hash function is not enough as they are not designed to be collision resistant; it is computationally trivial for an attacker to cause deliberate hash collisions, meaning that a malicious change in the file is not detected by a hash comparison. In cryptography, this attack is called a preimage attack. For this purpose, cryptographic hash functions are employed often. As long as the hash sums cannot be tampered with — for example, if they are communicated over a secure channel — the files can be presumed to be intact. Alternatively, digital signatures can be employed to assure tamper resistance. File formats A checksum file is a small file that contains the checksums of other files. There are a few well-known checksum file formats. Several utilities, such as md5deep, can use such checksum files to automatically verify an entire directory", "-peer network are received undamaged and unaltered, and to check that the other peers do not \"lie\" and send fake blocks. Usually a cryptographic hash function such as SHA-256 is used for the hashing. If the hash list only needs to protect against unintentional damage unsecured checksums such as CRCs can be used. Hash lists are better than a simple hash of the entire file since, in the case of a data block being damaged, this is noticed, and only the damaged block needs to be redownloaded. With only a hash of the file, many undamaged blocks would have to be redownloaded, and the file reconstructed and tested until the correct hash of the entire file is obtained. Hash lists also protect against nodes that try to sabotage by sending fake blocks, since in such a case the damaged block can be acquired from some other source. Hash lists are used to identify CSAM online. Protocols using hash lists Rsync Zsync Bittorrent See also Hash tree Hash table Hash chain Ed2k: URI scheme, which uses an MD4 top hash of an MD4 hash list to uniquely identify a file Cryptographic hash function List" ]
[ "$O(1)$", "$O(number of direntries in the directory)$", "$O(size of the file system)$", "$O(number of direntries in the file system)$", "$O(log(number of direntries in the directory))$" ]
['$O(number of direntries in the directory)$']
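A hash table gives O(1) expected lookup of a single name, but listing has no shortcut: every bucket and every chained entry must be visited. The toy program below (all names invented for the sketch) makes that explicit; its cost grows linearly with the number of direntries in the directory.

```c
#include <stdio.h>

/* Toy directory stored as a chained hash table.  A single lookup is ~O(1),
 * but listing must walk every bucket and every chained entry, i.e.
 * O(number of direntries in the directory). */
#define NBUCKETS 8

struct dirent_node {
    const char *name;
    struct dirent_node *next;
};

static struct dirent_node a = { "a.txt", NULL };
static struct dirent_node b = { "b.txt", NULL };
static struct dirent_node c = { "c.txt", &b };            /* same bucket as b */
static struct dirent_node *buckets[NBUCKETS] = { &a, NULL, &c };

int main(void) {
    for (int i = 0; i < NBUCKETS; i++)                    /* every bucket ...  */
        for (struct dirent_node *e = buckets[i]; e; e = e->next)
            printf("%s\n", e->name);                      /* ... every entry   */
    return 0;
}
```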
40
In JOS, suppose one Env sends a page to another Env. Is the page copied?
[ "tag probability e pronunciation b oa r d z etc set of record identi ed by a reference e g a database with primary key Words Tokens Introduction Words Tokens Lexicon N gram Conclusion c EPFL J C Chappelier Field representation External v Internal structure i e serialization v memory representation Internal structure suited for an ef cient implementation of the two access method by value and by reference for each eld not necessarily the same for all eld not even necessarily the same for the two method of a given eld Words Tokens Introduction Words Tokens Lexicon N gram Conclusion c EPFL J C Chappelier eld by value access f o value by reference access fo reference reference surface form PoS lemma prononc board Ns board Ns b oa r d board Np board Ns b oa r d z y Vx y Vx f l i y Ns y Ns f l i by valuesurface y by refPoS Np All PoS tag for fly by refPoS", "and Bob have already shared a secret key [click] “out of band,” i.e., without using the network. E.g., one may have given it physically to the other. Alice has a message (a “plaintext”) [click] for Bob: - She provides it as input to her encryption algorithm (together with the shared key) and obtains a “ciphertext” [click] (a “jumbled” version of the plaintext). - Alice sends the ciphertext to Bob [click]. - Bob provides the ciphertext as input to his decryption algorithm and obtains the plaintext [click]. Now, suppose Eve [click] is an evil packet switch that forward’s Alice’s traffic to Bob. She sees the packet(s) carrying Alice’s message, but she cannot read the message – as long as she does not know the shared secret key. This is how to achieve confidentiality using *symmetric-key cryptography*, where Alice and Bob have shared a secret key. 60 encryption algorithm decryption algorithm Bob-key+ Bob-key- Alice Bob plaintext ciphertext plaintext ciphertext Eve Confidentiality can also be achieved through *asymmetric-key cryptography*. In asymmetric-key cryptography, each entity has two keys: a public (which may be known by everyone) and a private one (which should be known only by the entity itself). When someone encrypts something with Bob’s public key, Bob decrypts it with his private key. Conversely, when Bob encrypts something with his private key, somebody else can decrypt it with Bob’s public key. The setup is similar to the previous slide. However, Alice and Bob have not shared a secret key; Bob has his private key (Bob-key-), and Alice knows Bob’s public key (Bob-key+). [click] Alice has a plaintext for Bob: - She provides it as input to her encryption algorithm (together with Bob’s public key) and obtains a ciphertext. - Alice sends the ciphertext to Bob. - Bob provides the ciphertext as input to his decryption algorithm (together with his private key) and obtains the plaintext. 
Once again, Eve cannot read the", "Kapralov and Ola Svensson EPFL Notes by Joachim Favre Quantum science and engineering master Semester Autumn I made this document for my own use but I thought that typed note might be of interest to others There are mistake it is impossible not to make any If you find some please feel free to share them with me grammatical and vocabulary error are of course also welcome You can contact me at the following e mail address joachim favre epfl ch If you did not get this document through my GitHub repository then you may be interested by the fact that I have one on which I put those typed note and their LATEX code Here is the link make sure to read the README to understand how to download the file you re interested in http github com JoachimFavre EPFLNotesIN Please note that the content doe not belong to me I have made some structural change reworded some part and added some personal note but the wording and explanation come mainly from the Professor and from the book on which they based their course I think it is worth mentioning that in order to get these note typed up I took my note in LATEX during the course and then made some correction I do not think typing handwritten note is doable in term of the amount of work To take note in LATEX I took my inspiration from the following link written by Gilles Castel If you want more detail feel free to contact me at my e mail address mentioned hereinabove http castel dev post lecture note I would also like to specify that the word trivial and simple do not have in this course the definition you find in a dictionary We are at EPFL nothing we do is trivial Something trivial is something that a random person in the street would be able to do In our context understand these word more a simpler than the rest Also it is okay if you take a while to understand something that is said to be trivial especially a I love using this word everywhere hihi Since you are reading this I will give you a little advice Sleep is a much more powerful tool than you may imagine so do not neglect a good night of sleep in favour of studying especially the night before an exam I wish you to have fun during your exam Version To Gilles Castel whose work ha inspired me this note taking method Rest in peace nobody deserves to go so young Contents Summary by lecture Greed", "I made this document for my own use but I thought that typed note might be of interest to others There are mistake it is impossible not to make any If you find some please feel free to share them with me grammatical and vocabulary error are of course also welcome You can contact me at the following e mail address joachim favre epfl ch If you did not get this document through my GitHub repository then you may be interested by the fact that I have one on which I put those typed note and their LATEX code Here is the link make sure to read the README to understand how to download the file you re interested in http github com JoachimFavre EPFLNotesIN Please note that the content doe not belong to me I have made some structural change reworded some part and added some personal note but the wording and explanation come mainly from the Professor and from the book on which they based their course I think it is worth mentioning that in order to get these note typed up I took my note in LATEX during the course and then made some correction I do not think typing handwritten note is doable in term of the amount of work To take note in LATEX I took my inspiration from the following link 
written by Gilles Castel If you want more detail feel free to contact me at my e mail address mentioned hereinabove http castel dev post lecture note I would also like to specify that the word trivial and simple do not have in this course the definition you find in a dictionary We are at EPFL nothing we do is trivial Something trivial is something that a random person in the street would be able to do In our context understand these word more a simpler than the rest Also it is okay if you take a while to understand something that is said to be trivial especially a I love using this word everywhere hihi Since you are reading this I will give you a little advice Sleep is a much more powerful tool than you may imagine so do not neglect a good night of sleep in favour of studying especially the night before an exam I wish you to have fun during your exam Version To Gilles Castel whose work ha inspired me this note taking method Rest in peace nobody deserves to go so young Contents Summary by lecture Structural complexity Recalls P complexity class NP complexity class NP completeness Cook Levin theorem Time", "? The traditional scheme for transferring data across an erasure channel depends on continuous two-way communication. The sender encodes and sends a packet of information. The receiver attempts to decode the received packet. If it can be decoded, the receiver sends an acknowledgment back to the transmitter. Otherwise, the receiver asks the transmitter to send the packet again. This two-way process continues until all the packets in the message have been transferred successfully. Certain networks, such as ones used for cellular wireless broadcasting, do not have a feedback channel. Applications on these networks still require reliability. Fountain codes in general, and LT codes in particular, get around this problem by adopting an essentially one-way communication protocol. The sender encodes and sends packet after packet of information. The receiver evaluates each packet as it is received. If there is an error, the erroneous packet is discarded. Otherwise the packet is saved as a piece of the message. Eventually the receiver has enough valid packets to reconstruct the entire message. When the entire message has been received successfully the receiver signals that transmission is complete. As mentioned above, the RaptorQ code specified in IETF RFC 6330 outperforms an LT code in practice. LT encoding The encoding process begins by dividing the uncoded message into n blocks of roughly equal length. Encoded packets are then produced with the help of a pseudorandom number generator. The degree d, 1 ≤ d ≤ n, of the next packet is chosen at random. Exactly d blocks from the message are randomly chosen. If Mi is the i-th block of the message, the data portion of the next packet is computed as M i 1 ⊕ M i 2 ⊕ ⋯ ⊕ M i d {\\displaystyle M_{i_{1}}\\oplus M_{i_{2}}\\oplus \\cdots \\oplus M_{i_{d}}\\,} where {i1, i2,..., id} are the randomly chosen indices for the d blocks included in this packet. A prefix is appen" ]
[ "Yes", "No" ]
['No']
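In JOS, sys_ipc_try_send does not copy the page: the kernel inserts the sender's physical page into the receiver's page table (incrementing its reference count), so both environments map the same frame. JOS kernel code is not directly runnable here, so the snippet below uses plain POSIX calls as an analogy, not JOS APIs: a MAP_SHARED page is visible to two processes without any copy, which is the same sharing semantics the JOS IPC page transfer provides.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* POSIX analogy (not JOS code): one shared page, two processes, no copy. */
int main(void) {
    char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                    /* "sender": writes into the page   */
        strcpy(page, "hello from the sender");
        _exit(0);
    }
    wait(NULL);                           /* "receiver": sees the same bytes  */
    printf("receiver reads: %s\n", page);
    return 0;
}
```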
41
In JOS and x86, please select all valid options for a system call.
[ "extra wv2 x2x xalan-java xbill xbitmaps xcb-proto xclip xerces2-java xf86-input-acecad xf86-input-aiptek xf86-input-joystick xf86-input-keyboard xf86-input-mouse xf86-input-synaptics xf86-input-vmmouse xf86-input-void xf86-video-apm xf86-video-ark xf86-video-ast xf86-video-chips xf86-video-cirrus xf86-video-dummy xf86-video-fbdev xf86-video-glint xf86-video-i128 xf86-video-i740 xf86-video-mach64 xf86-video-mga xf86-video-neomagic xf86-video-nv xf86-video-r128 xf86-video-rendition xf86-video-s3 xf86-video-s3virge xf86-video-savage xf86-video-siliconmotion xf86-video-sis xf86-video-sisusb xf86-video-tdfx xf86-video-trident xf86-video-tseng xf86-video-unichrome xf86-video-v4l xf86-video-vesa xf86-video-vmware xf86-video-voodoo xf86-video-xgi xf86-video-xgixp xf86dgaproto xf86vidmodeproto xfce4-taskmanager xfwm4-themes xineramaproto xkeyboard-config xmahjongg xorg-apps xorg-bdftopcf xorg-setxkbmap xorg-xcalc xorg-xcmsdb xorg-xdriinfo xorg-xev xorg-xkbcomp xorg-xkbevd xorg-xlsatoms xorg-xlsclients xorg-xlsfonts xorg-xmodmap xorg-xrefresh xorg-xset xorg", "–16 bytes to set up the values to be loaded. More common is to use the loop setup instruction (represented in assembly as either LOOP with pseudo-instruction LOOP_BEGIN and LOOP_END, or in a single line as LSETUP), which optionally initializes LCx and sets LTx and LBx to the desired values. This only requires 4–6 bytes, but can only set LTx and LBx within a limited range relative to where the loop setup instruction is located. x86 The x86 assembly language REP prefixes implement zero-overhead loops for a few instructions (namely MOVS/STOS/CMPS/LODS/SCAS). Depending on the prefix and the instruction, the instruction will be repeated a number of times with (E)CX holding the repeat count, or until a match (or non-match) is found with AL/AX/EAX or with DS:[(E)SI]. This can be used to implement some types of searches and operations on null-terminated strings.", "Returns to the calling process and lowers the privilege level at the same time: (Ring 0 → Ring 3) • Now, privileged operations cannot be performed 30 Preparing for a system call: save a process’ states OS Kernel mode User mode Process Save the states of the process On the x86, the trap will push the program counter, flags, and general-purpose registers onto a per-process stack trap Time 31 Completing a system call: restore a process’ states OS Kernel mode User mode return-from-trap Restore the states of the process Process Time On the x86, the return will pop the program counter, flags, and general-purpose registers off the per-process stack 32 Putting everything together for a system call OS Kernel mode User mode Process trap Time Process 1. A system call is a trap instruction 2. OS saves registers to per-process stack 3. Change mode from Ring 3 to Ring 0 1. Execute privileged operations 1. Change mode from Ring 0 to Ring 3 2. Restore the state of the process by popping registers in the return from trap Return- from- trap L03.4: Traps and Interrupts CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU 34 Traps/Exceptions • Traps are also referred as exceptions • Handle internal program errors • Overflow, division by zero, accessing not allowed memory region • Exceptions are produced by the CPU while executing instructions • Exceptions are synchronous: CPU invokes them only after terminating the invocation of an instruction Question How does a trap know which code to run in the OS? 
35 36 OS configures hardware at boot time During boot... • The OS tells hardware what code to run when certain exceptional events occur • OS configures specific handlers that hardware remembers • Hardware then know what to do when certain exceptional events occur • System call Code to run when a hard disk interrupt occurs Code to run when a keyboard interrupt occurs Code to run for a system call Trap table Trap entries 37 Requesting OS services using system call numbers Code to run for a system call Trap table Trap entries Only one handler routine for system call, but multiples of system calls are possible! • Each system call has a specific number •", "The operating system takes control L03.3: System calls CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU Question How can a process request (from the OS) for operations that are only possible in the kernel mode (example: IO requests)? 25 26 Requesting OS services (user mode → kernel mode) • Processes can request OS services through the system call API (example: fork/exec/wait) • System calls transfer execution to the OS, meanwhile the execution of the process is suspended OS Kernel mode User mode Process Process System call issued Return from system call Time 27 System calls System calls exposes key functionalities: • Creating and destroying processes • Accessing the file system • Communicating with other processes • Allocating memory Most OSes provide hundreds of system calls • Linux currently has more than 300+ 28 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time (Ring 3 → Ring 0) • Now, privileged operations can be performed Trap is a signal raised by a process instructing the OS to perform some functionality immediately 29 Steps of system call execution To execute a system call: • A process executes a special trap instruction • CPU jumps into the kernel mode and it raises the privilege level at the same time: (Ring 3 → Ring 0) • Now, privileged operations can be performed • When finished, the OS calls a special return-from-trap instruction • Returns to the calling process and lowers the privilege level at the same time: (Ring 0 → Ring 3) • Now, privileged operations cannot be performed 30 Preparing for a system call: save a process’ states OS Kernel mode User mode Process Save the states of the process On the x86, the trap will push the program counter, flags, and general-purpose registers onto a per-process stack trap Time 31 Completing a system call: restore a process’ states OS Kernel mode User mode return-from-trap Restore the states of the process Process Time", "86-video-dummy xf86-video-fbdev xf86-video-glint xf86-video-i128 xf86-video-i740 xf86-video-mach64 xf86-video-mga xf86-video-neomagic xf86-video-nv xf86-video-r128 xf86-video-rendition xf86-video-s3 xf86-video-s3virge xf86-video-savage xf86-video-siliconmotion xf86-video-sis xf86-video-sisusb xf86-video-tdfx xf86-video-trident xf86-video-tseng xf86-video-unichrome xf86-video-v4l xf86-video-vesa xf86-video-vmware xf86-video-voodoo xf86-video-xgi xf86-video-xgixp xf86dgaproto xf86vidmodeproto xfce4-taskmanager xfwm4-themes xineramaproto xkeyboard-config xmahjongg xorg-apps xorg-bdftopcf xorg-setxkbmap xorg-xcalc xorg-xcmsdb xorg-xdriinfo xorg-xev xorg-xkbcomp xorg-xkbevd xorg-xlsatoms xorg-xlsclients xorg-xlsfonts xorg-xmodmap xorg-xrefresh xorg-xset xorg-xwd xorg-xwininfo 
xorg-xwud xpdf-arabic xpdf-chinese-simplified xpdf-chinese-traditional xpdf-cyrillic xpdf-greek xpdf-hebrew xpdf-japanese xpdf-korean xpdf-latin2 xpdf-thai xpdf-turkish xsnow yasm yelp-xsl zd1211-firmware zile zope-interface konq-plugins tidyhtml python-nose python-pip python-virtualenv gnome-python-desktop apache-ant junit perl-xml" ]
[ "A system call is for handling interrupts like dividing zero error and page fault.", "In user mode, before and after a system call instruction(such as int 0x30), the stack pointer(esp in x86) stays the same.", "During the execution of a system call, when transfering from user mode to kernel mode, the stack pointer(esp in x86) stays the same." ]
['In user mode, before and after a system call instruction(such as int 0x30), the stack pointer(esp in x86) stays the same.']
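The reason only the second statement holds: on a ring 3 to ring 0 trap, the CPU switches to the kernel stack named by the TSS (esp0) and saves the user ss/esp there, so the kernel never runs on the user stack, and the user esp register is restored unchanged by iret. Below is a simplified picture of the words the hardware pushes onto that kernel stack, loosely based on the tail of the JOS Trapframe (padding omitted; `int 0x30` pushes no error code).

```c
#include <stdint.h>

/* What the CPU pushes on the *kernel* stack (address taken from TSS.esp0)
 * when a user-mode `int 0x30` crosses from ring 3 to ring 0.  The user
 * stack pointer is only saved here; the user-mode esp itself is unchanged
 * before and after the syscall instruction. */
struct ring_crossing_frame {
    uint32_t eip;        /* user return address                */
    uint32_t cs;         /* user code segment (low 16 bits)    */
    uint32_t eflags;
    uint32_t esp;        /* saved *user* stack pointer         */
    uint32_t ss;         /* user stack segment (low 16 bits)   */
};
```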
44
What are the drawbacks of non-preemptive scheduling compared to preemptive scheduling?
[ "Run-to-completion scheduling or nonpreemptive scheduling is a scheduling model in which each task runs until it either finishes, or explicitly yields control back to the scheduler. Run-to-completion systems typically have an event queue which is serviced either in strict order of admission by an event loop, or by an admission scheduler which is capable of scheduling events out of order, based on other constraints such as deadlines. Some preemptive multitasking scheduling systems behave as run-to-completion schedulers in regard to scheduling tasks at one particular process priority level, at the same time as those processes still preempt other lower priority tasks and are themselves preempted by higher priority tasks. See also Preemptive multitasking Cooperative multitasking", "background - where foreground processes are given high priority) to understand non pre-emptive and pre-emptive multilevel scheduling in depth with FCFS algorithm for both the queues: See also Fair-share scheduling Lottery scheduling", "Fixed-priority preemptive scheduling is a scheduling system commonly used in real-time systems. With fixed priority preemptive scheduling, the scheduler ensures that at any given time, the processor executes the highest priority task of all those tasks that are currently ready to execute. The preemptive scheduler has a clock interrupt task that can provide the scheduler with options to switch after the task has had a given period to execute—the time slice. This scheduling system has the advantage of making sure no task hogs the processor for any time longer than the time slice. However, this scheduling scheme is vulnerable to process or thread lockout: since priority is given to higher-priority tasks, the lower-priority tasks could wait an indefinite amount of time. One common method of arbitrating this situation is aging, which gradually increments the priority of waiting processes and threads, ensuring that they will all eventually execute. Most real-time operating systems (RTOSs) have preemptive schedulers. Also turning off time slicing effectively gives you the non-preemptive RTOS. Preemptive scheduling is often differentiated with cooperative scheduling, in which a task can run continuously from start to end without being preempted by other tasks. To have a task switch, the task must explicitly call the scheduler. Cooperative scheduling is used in a few RTOS such as Salvo or TinyOS.", "jobs to be interrupted (paused and resumed later) 39 Preemptive scheduling • Previous schedulers (FIFO, SJF) are non-preemptive • Non-preemptive schedulers only switch to other jobs once the current jobs is finished (run-to-completion) OR • Other way: Non-preemptive schedulers only switch to other process if the current process gives up the CPU voluntarily 40 Preemptive scheduling • Previous schedulers (FIFO, SJF) are non-preemptive • Non-preemptive schedulers only switch to other jobs once the current jobs is finished (run-to-completion) OR • Other way: Non-preemptive schedulers only switch to other process if the current process gives up the CPU voluntarily • Preemptive schedulers can take the control of CPU at any time, switching to another process according to the the scheduling policy • OS relies on timer interrupts and context switch for preemptive process/jobs 41 Shortest time to completion first (STCF) • STCF extends the SJF by adding preemption • Any time a new job enters the system: a. STCF scheduler determines which of the remaining jobs (including new job) has the least time left b. 
STCF then schedules the shortest job first 42 Shortest time to completion first (STCF) • A runs for 100 seconds, while B and C run 10 seconds • When B and C arrive, A gets preempted and is scheduled after B/C are finished • Tarrival(A) = 0 • Tarrival(B) = Tarrival(C) = 10 • Tturnaround(A) = 120 • Tturnaround(B) = (20 - 10) = 10 • Tturnaround(C) = (30 - 10) = 20 Average turnaround time = (120 + 10 + 20) / 3 = 50 0 20 40 60 80 100 120 A B C [B, C arrive] A 43 Shortest time to completion first (STCF) • A runs for 100 seconds, while B and C run 10 seconds • When B and C arrive, A gets preempted and is scheduled after B/C are finished • Tarrival(A) = 0 • Tarrival(B) = Tarrival(C", "on a result from process B, then process X might never finish, even though it is the most important process in the system. This condition is called a priority inversion. Modern scheduling algorithms normally contain code to guarantee that all processes will receive a minimum amount of each important resource (most often CPU time) in order to prevent any process from being subjected to starvation. In computer networks, especially wireless networks, scheduling algorithms may suffer from scheduling starvation. An example is maximum throughput scheduling. Starvation is normally caused by deadlock in that it causes a process to freeze. Two or more processes become deadlocked when each of them is doing nothing while waiting for a resource occupied by another program in the same set. On the other hand, a process is in starvation when it is waiting for a resource that is continuously given to other processes. Starvation-freedom is a stronger guarantee than the absence of deadlock: a mutual exclusion algorithm that must choose to allow one of two processes into a critical section and picks one arbitrarily is deadlock-free, but not starvation-free. A possible solution to starvation is to use a scheduling algorithm with priority queue that also uses the aging technique. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. See also Dining philosophers problem" ]
[ "It can lead to starvation especially for those real-time tasks", "Less computational resources need for scheduling and takes shorted time to suspend the running task and switch the context.", "Bugs in one process can cause a machine to freeze up", "It can lead to poor response time for processes" ]
['It can lead to starvation especially for those real-time tasks', 'Bugs in one process can cause a machine to freeze up', 'It can lead to poor response time for processes']
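The "poor response time" drawback is easy to quantify with the A/B/C workload from the STCF slide above: under non-preemptive FIFO, B and C must wait behind the 100-second job A, while preemptive STCF lets them finish first. A small check of the arithmetic:

```c
#include <stdio.h>

/* Turnaround times for the slide's workload: A needs 100 s (arrives at t=0),
 * B and C need 10 s each (arrive at t=10). */
int main(void) {
    /* Non-preemptive FIFO: A runs to completion, then B, then C. */
    double fifo_A = 100 - 0, fifo_B = 110 - 10, fifo_C = 120 - 10;
    /* Preemptive STCF: A is preempted at t=10 while B and C run. */
    double stcf_A = 120 - 0, stcf_B = 20 - 10, stcf_C = 30 - 10;
    printf("FIFO avg turnaround: %.1f s\n", (fifo_A + fifo_B + fifo_C) / 3); /* 103.3 */
    printf("STCF avg turnaround: %.1f s\n", (stcf_A + stcf_B + stcf_C) / 3); /*  50.0 */
    return 0;
}
```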
45
Select valid answers about file descriptors (FD):
[ "\"Everything is a file\" is an approach to interface design in Unix derivatives. While this turn of phrase does not as such figure as a Unix design principle or philosophy, it is a common way to analyse designs, and informs the design of new interfaces in a way that prefers, in rough order of import: representing objects as file descriptors in favour of alternatives like abstract handles or names, operating on the objects with standard input/output operations returning byte streams to be interpreted by applications (rather than explicitly structured data), and allowing the usage or creation of objects by opening or creating files in the global filesystem name space. The lines between the common interpretations of \"file\" and \"file descriptor\" are often blurred when analysing Unix, and nameability of files is the least important part of this principle; thus, it is sometimes described as \"Everything is a file descriptor\". This approach is interpreted differently with time, philosophy of each system, and the domain to which it's applied. The rest of this article demonstrates notable examples of some of those interpretations, and their repercussions. Objects as file descriptors Under Unix, a directory can be opened like a regular file, containing fixed-size records of (i-node, filename), but directories cannot be written to directly, and are modified by the kernel as a side-effect of creating and removing files within the directory. Some interfaces only follow a subset of these guidelines, for example pipes do not exist on the filesystem — pipe() creates a pair of unnameable file descriptors. The later invention of named pipes (FIFOs) by POSIX fills this gap. This does not mean that the only operations on an object are reading and writing: ioctl() and similar interfaces allow for object-specific operations (like controlling tty characteristics), directory file descriptors can be used to alter path look-ups (with a growing number of *at() system call variants like openat()) or to change the working directory to the one represented by the file descriptor, in both cases preventing race conditions and being faster than the alternative of looking up the entire", "df -- report filesystem disk space df. 40 Benefit of using mount points / bin ls home sanidhya linuxbrew A single name space! ●Uniform access with the same API Important commands ●mount <device> <dir> mount /dev/cdrom /media/cdrom mount -t ext4 /dev/sda5 /home ●df -- report filesystem disk space df. 41 Benefit of using mount points / bin ls home sanidhya linuxbrew A file can be moved efficiently within a filesystem ●Keep the inode and blocks on disk ●Requires a copy+delete across filesystems Command “mv” handles this transparently to users 42 The mount point is an abstraction Built by adding a level of indirection ●Mostly transparent to users ●... except when it is not 43 The mount point is an abstraction A file can be moved efficiently within a filesystem ●Keep the inode and blocks on disk ●Requires a copy+delete across filesystems Command “mv” handles this transparently to users Very different filesystem types can co-exist in the same namespace ●ext3, ext4, NTFS → filesystems optimised for hard disks ●iso96000 → filesystem standards for CDROM ●FAT → the legacy, universal MS-DOS filesystem 44 Benefits of mounting: pseudo filesystems The filesystem abstraction can be used to manage non-persistent content ●tmpfs (/run) -- uses memory; cleared at reboot ●procfs (/proc) -- exposes process state as a set of files. 
L07.4 file system implementation CS-202 Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU • File system manages data for users • Given: a large set (N) of blocks • Need: data structures to encode file hierarchy and per file metadata • Overhead (metadata vs file data size) should be low • Internal fragmentation should be low • Efficient access of file contents: external fragmentation, # metadata access • Implement file system APIs • Several choices are available (simi", "ering options (setvbuf) • Unix/Linux system calls • open,read, write, lseek • Operate on file descriptors 12 < (3) → library function part of libc > man fread or man 3 fread ●Benefits of using FILE* calls ○ Portability across operating systems ○ Higher-level abstractions such as buffering 13 (3) → library function part of libc $ man fread or man 3 fread ●Benefits of using FILE* calls ○ Portability across operating systems ○ Higher-level abstractions such as buffering (2) → system calls $ man read or man 2 read ●Uses file descriptors ●Same code works for files, pipes, and sockets (covered in networking lectures) < 14 Example L07.2 The file system abstraction CS-202 Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU • Addresses need for long-term information storage: • Store large amounts of information • Do it in a way that outlives the program • Can support concurrent accesses from multiple processes • Presents applications with persistent, named data • Two main components: • Files • Directories 16 File system abstraction • A file is named collection of related information that is recorded in secondary storage • Or, a linear persistent array of bytes • Has two parts: • Data: what a user or application puts in it • Array of bytes • Metadata: Information added and managed by the OS • Size, owner, security information, modification time, etc. 17 The file abstraction: File 1. Inode and device number (persistent ID) 2. Path (human readable) 3. File descriptor (process view) 18 The file abstraction: 3 perspectives Processor Memory Storage IO connection HW Operating system Process Threads Address space Files Sockets • Low-level unique ID assigned to the file by the file system • Note: Inodes are unique for a file system but not globally • Recycled after deletion • An inode contains metadata of a file • Permissions, length, access times • Location of data blocks and indirection blocks • Each file has exactly one associated inode 19 OS view:", "A self-extracting archive (SFX or SEA) is a computer executable program which combines compressed data in an archive file with machine-executable code to extract the information. Running on a compatible operating system, it does not need a suitable extractor in the target computer to extract the data. The executable part of the file is known as a decompressor stub. Self-extracting files are used to share compressed files with a party that may not have the software needed to decompress a regular archive. Users can also use self-extracting archives to distribute their own software. For example, the WinRAR installation program is made using the graphical GUI RAR self-extracting module Default.sfx. Overview Self-extracting archives contain an executable file module, which is used to run uncompressed files from compressed files. The latter does not require an external program to decompress the contents of the self-extracting file and can run the operation itself. 
However, file archivers like WinRAR can still treat a self-extracting file as if it were any other type of compressed file. By using a file archiver, users can view or decompress self-extracting files they received without running executable code (for example, if they are concerned about viruses). A self-extracting archive is extracted and stored on a disk when executed under an operating system that supports it. Many embedded self-extractors support a number of command-line arguments, such as specifying the target location or selecting only specific files. Unlike self-extracting archives, non-self-extracting archives only contain archived files and must be extracted with a program that is compatible with them. While some formats of self-extracting archives cannot be extracted under another operating system, non-self-extracting ones can usually still be opened using a suitable extractor. This tool will disregard the executable part of the file and extract only the archive resource. The self-extracting executable may need to be renamed to contain a file extension associated with the corresponding packer; archive file formats known to support this include ARJ and ZIP. Typically, self", "not consider that fact dispositive. \"By providing a website with... well-developed search functions, easy uploading and storage possibilities, and with a tracker linked to the website, the accused have incited the crimes that the filesharers have committed,\" the court said in a statement. See also Bandwidth Copyright aspects of hyperlinking and framing Download manager Digital distribution HADOPI law Music download Peer-to-peer Progressive download Sideloading List of download managers (includes tools like Downr.org) References External links Media related to Download icons at Wikimedia Commons" ]
[ "The value of FD is unique for every file in the operating system.", "FD is usually used as an argument for read and write.", "FD is constructed by hashing the filename.", "FDs are preserved after fork() and can be used in the new process pointing to the original files." ]
['FD is usually used as an argument for read and write.', 'FDs are preserved after fork() and can be used in the new process pointing to the original files.']
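The two accepted statements can be demonstrated in a few lines of C: the descriptor returned by open() is what read()/write() take, and after fork() the child inherits that descriptor and it still refers to the same open file, sharing its offset. The path /tmp/fd-demo.txt is just a throwaway name for this sketch.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void) {
    int fd = open("/tmp/fd-demo.txt", O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    pid_t pid = fork();
    if (pid == 0) {                        /* child: same fd, same file, same offset */
        const char *msg = "child\n";
        write(fd, msg, strlen(msg));
        _exit(0);
    }
    wait(NULL);
    const char *msg = "parent\n";          /* appended after the child's write */
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}
```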
46
What is the default block size for a traditional file system, e.g. ext3/4?
[ "the meaning of the standard metric terms. Rather than based on powers of 1000, these are based on powers of 1024 which is a power of 2. The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. Size examples 1 bit: Answer to a yes/no question 1 byte: A number from 0 to 255 90 bytes: Enough to store a typical line of text from a book 512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes). 1024 bytes = 1 KiB: A block size in some older UNIX filesystems 2048 bytes = 2 KiB: A CD-ROM sector 4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size. 4 kB: About one page of text from a novel 120 kB: The text of a typical pocket book 1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth) 3 MB: A three-minute song (133 kbit/s) 650–900 MB – a CD-ROM 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s 16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024) 32/64/128 GB: Three common sizes of USB flash drives 1 TB: The size of a $30 hard disk (as of early 2024) 6 TB: The size of a $100 hard disk (as of early 2022) 16 TB: The size of a small/cheap $130 (as of early 2024) enterprise SAS hard disk drive 24 TB: The size of $440 (as of early 2024) \"video\" hard disk drive 32 TB: Largest hard disk drive (as of mid-2024) 100 TB: Largest commercially available solid-state drive (as of mid-2024) 200 TB: Largest solid-state drive constructed (prediction for mid-2022) 1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-states drives). 1.3", "the meaning of the standard metric terms. Rather than based on powers of 1000, these are based on powers of 1024 which is a power of 2. The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. Size examples 1 bit: Answer to a yes/no question 1 byte: A number from 0 to 255 90 bytes: Enough to store a typical line of text from a book 512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes). 1024 bytes = 1 KiB: A block size in some older UNIX filesystems 2048 bytes = 2 KiB: A CD-ROM sector 4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size. 
4 kB: About one page of text from a novel 120 kB: The text of a typical pocket book 1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth) 3 MB: A three-minute song (133 kbit/s) 650–900 MB – a CD-ROM 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s 16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024) 32/64/128 GB: Three common sizes of USB flash drives 1 TB: The size of a $30 hard disk (as of early 2024) 6 TB: The size of a $100 hard disk (as of early 2022) 16 TB: The size of a small/cheap $130 (as of early 2024) enterprise SAS hard disk drive 24 TB: The size of $440 (as of early 2024) \"video\" hard disk drive 32 TB: Largest hard disk drive (as of mid-2024) 100 TB: Largest commercially available solid-state drive (as of mid-2024) 200 TB: Largest solid-state drive constructed (prediction for mid-2022) 1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-states drives). 1.3", "the meaning of the standard metric terms. Rather than based on powers of 1000, these are based on powers of 1024 which is a power of 2. The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. Size examples 1 bit: Answer to a yes/no question 1 byte: A number from 0 to 255 90 bytes: Enough to store a typical line of text from a book 512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes). 1024 bytes = 1 KiB: A block size in some older UNIX filesystems 2048 bytes = 2 KiB: A CD-ROM sector 4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size. 4 kB: About one page of text from a novel 120 kB: The text of a typical pocket book 1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth) 3 MB: A three-minute song (133 kbit/s) 650–900 MB – a CD-ROM 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s 16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024) 32/64/128 GB: Three common sizes of USB flash drives 1 TB: The size of a $30 hard disk (as of early 2024) 6 TB: The size of a $100 hard disk (as of early 2022) 16 TB: The size of a small/cheap $130 (as of early 2024) enterprise SAS hard disk drive 24 TB: The size of $440 (as of early 2024) \"video\" hard disk drive 32 TB: Largest hard disk drive (as of mid-2024) 100 TB: Largest commercially available solid-state drive (as of mid-2024) 200 TB: Largest solid-state drive constructed (prediction for mid-2022) 1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-states drives). 1.3", "the meaning of the standard metric terms. Rather than based on powers of 1000, these are based on powers of 1024 which is a power of 2. The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. Size examples 1 bit: Answer to a yes/no question 1 byte: A number from 0 to 255 90 bytes: Enough to store a typical line of text from a book 512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes). 
1024 bytes = 1 KiB: A block size in some older UNIX filesystems 2048 bytes = 2 KiB: A CD-ROM sector 4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size. 4 kB: About one page of text from a novel 120 kB: The text of a typical pocket book 1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth) 3 MB: A three-minute song (133 kbit/s) 650–900 MB – a CD-ROM 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s 16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024) 32/64/128 GB: Three common sizes of USB flash drives 1 TB: The size of a $30 hard disk (as of early 2024) 6 TB: The size of a $100 hard disk (as of early 2022) 16 TB: The size of a small/cheap $130 (as of early 2024) enterprise SAS hard disk drive 24 TB: The size of $440 (as of early 2024) \"video\" hard disk drive 32 TB: Largest hard disk drive (as of mid-2024) 100 TB: Largest commercially available solid-state drive (as of mid-2024) 200 TB: Largest solid-state drive constructed (prediction for mid-2022) 1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-states drives). 1.3", "the meaning of the standard metric terms. Rather than based on powers of 1000, these are based on powers of 1024 which is a power of 2. The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. Size examples 1 bit: Answer to a yes/no question 1 byte: A number from 0 to 255 90 bytes: Enough to store a typical line of text from a book 512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes). 1024 bytes = 1 KiB: A block size in some older UNIX filesystems 2048 bytes = 2 KiB: A CD-ROM sector 4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size. 4 kB: About one page of text from a novel 120 kB: The text of a typical pocket book 1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth) 3 MB: A three-minute song (133 kbit/s) 650–900 MB – a CD-ROM 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s 16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024) 32/64/128 GB: Three common sizes of USB flash drives 1 TB: The size of a $30 hard disk (as of early 2024) 6 TB: The size of a $100 hard disk (as of early 2022) 16 TB: The size of a small/cheap $130 (as of early 2024) enterprise SAS hard disk drive 24 TB: The size of $440 (as of early 2024) \"video\" hard disk drive 32 TB: Largest hard disk drive (as of mid-2024) 100 TB: Largest commercially available solid-state drive (as of mid-2024) 200 TB: Largest solid-state drive constructed (prediction for mid-2022) 1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-states drives). 1.3" ]
[ "32 bits", "32 bytes", "512 bits", "512 bytes", "4096 bits", "4096 bytes" ]
['4096 bytes']
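The record above settles on 4096 bytes, the figure the context ties to both the x86 memory page and the modern Advanced Format disk sector. As a quick cross-check outside the dataset, the minimal C sketch below queries the page size the kernel reports to user space via the standard POSIX call sysconf(_SC_PAGESIZE); everything else in it is purely illustrative.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Page size exposed by the MMU/kernel; 4096 bytes on typical x86-64 systems. */
    long page = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page);
    return 0;
}
```

For the on-disk side, the commands already quoted in the context apply: `sudo fdisk -l | grep "Sector size"` reports the sector size and `sudo blockdev --getbsz /dev/sda` reports the filesystem block size.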
47
Suppose a file system is used only for reading immutable files in a random fashion. What is the best block allocation strategy?
[ "at the beginning of the partition Data blocks Data blocks Data blocks Data blocks Data blocks Data blocks Inodes Free lists • One logical superblock per file system • Stores the metadata about the file system • Number of inodes • Number of data blocks • Where the inode table begins • May contain information to manage free inodes/data blocks • Read first when mounting a file system 53 FS superblock • Various ways to allocate data to files: • Contiguous allocation: All bytes together, in order • Linked structure: Blocks ends with the next pointer • File allocation table: Table that contains block references • Multi-level indexed: Tree of pointers • Which approach is better? • Fragmentation, sequential access / random access, metadata overhead, ability to grow/shrink files, large files, small files 54 File allocation • All data blocks of each file is allocated contiguously • Simple: Only need start block and size • Efficient: One seek to read an entire file • Fragmentation: external fragmentation (can be serious) • Usability: User needs to know file’s size at the time of creation • Great for read-only file systems (CD/DVD/BlueRay) 55 File allocation: Contiguous file1 file2 file3 file4 Physical block • Each file consists of a linked list of blocks • Usually first word of each block points to the next block • In the above illustration, showing the next block pointer at the end • The rest of the block is data 56 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 • Each file consists of a linked list of blocks • Space utilization: No external fragmentation • Simple: Only need to find the first block of a file • Performance: Random access is slow → high seek cost (on the disk) • Implementation: Blocks mix data and metadata • Overhead: One pointer per block metadata is required 57 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 Decouple data and meta", "block 7 8 33 17 4 • Each file consists of a linked list of blocks • Space utilization: No external fragmentation • Simple: Only need to find the first block of a file • Performance: Random access is slow → high seek cost (on the disk) • Implementation: Blocks mix data and metadata • Overhead: One pointer per block metadata is required 57 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 Decouple data and metadata: Keep linked list information in a single table Instead of storing the next pointer at the end of the block, store all next pointer in a central table 58 File allocation: File allocation table (FAT) File block 0 next File block 1 next File block 2 next File block 3 next File block 4 next 7 8 33 17 4 4 7 8 17 33 Data Metadata Proposed by Microsoft, in late 70s ● Still widely used today ○ Thumb drives, CD ROMs • Separate data and metadata • Space utilization: No external fragmentation • No conflating data and metadata in the same block • Simple: Only need to find the first block of a file • Performance: Poor random access • Overhead: Limited metadata • Many file seeks unless entire FAT is stored in memory: Example: 1TB (240 bytes) disk, 4 KB block size, FAT has 256 million entries 4 bytes per entry → 1 GB of main memory required for FS 59 File allocation: File allocation table (FAT) Have a mix of direct, indirect, double direct, and 
triple indirect pointers for data 60 File allocation: Multi-level indexing S I I I I I I I I i-node blocks Remaining blocks Inode array ● Inode array is present at a known location on disk ● file number = inode number = inode in the array • Each file is a fixed, asymmetric tree, with fixed sized data blocks (e.g., 4 KB) as its leaves • The root of the tree is the file’s inode, containing: • metadata • A set of 15 pointers • First 12 pointers point to data blocks • Last three point to intermedi", "block Each file consists of a linked list of block Space utilization No external fragmentation Simple Only need to find the first block of a file Performance Random access is slow high seek cost on the disk Implementation Blocks mix data and metadata Overhead One pointer per block metadata is required File allocation Linked block File block file next File block next File block next File block next File block next Physical block Decouple data and metadata Keep linked list information in a single table Instead of storing the next pointer at the end of the block store all next pointer in a central table File allocation File allocation table FAT File block next File block next File block next File block next File block next Data Metadata Proposed by Microsoft in late s Still widely used today Thumb drive CD ROMs Separate data and metadata Space utilization No external fragmentation No conflating data and metadata in the same block Simple Only need to find the first block of a file Performance Poor random access Overhead Limited metadata Many file seek unless entire FAT is stored in memory Example TB byte disk KB block size FAT ha million entry byte per entry GB of main memory required for FS File allocation File allocation table FAT Have a mix of direct indirect double direct and triple indirect pointer for data File allocation Multi level indexing S I I I I I I I I i node block Remaining block Inode array Inode array is present at a known location on disk file number inode number inode in the array Each file is a fixed asymmetric tree with fixed sized data block e g KB a it leaf The root of the tree is the file s inode containing metadata A set of pointer First pointer point to data block Last three point to intermediate block themselves containing pointer pointer to a block containing pointer to data block double indirect pointer triple indirect pointer File structure for multi level indexing File allocation Multi level indexing S I I I I I I I I i node block Remaining block Inode array File metadata I node Data block Indirect block Double indirect block Triple indirect block x KB KB K x KB MB K x K x KB GB K x K x K x KB TB Key idea Tree structure Efficient in finding block High degree Efficient in sequential read Once", ", setting the date for the transition from 512 to 4096 byte sectors as January 2011 for all manufacturers, and Advanced Format drives soon became prevalent. Related units Sectors versus blocks While sector specifically means the physical disk area, the term block has been used loosely to refer to a small chunk of data. Block has multiple meanings depending on the context. In the context of data storage, a filesystem block is an abstraction over disk sectors possibly encompassing multiple sectors. In other contexts, it may be a unit of a data stream or a unit of operation for a utility. For example, the Unix program dd allows one to set the block size to be used during execution with the parameter bs=bytes. 
This specifies the size of the chunks of data as delivered by dd, and is unrelated to sectors or filesystem blocks. In Linux, disk sector size can be determined with sudo fdisk -l | grep \"Sector size\" and block size can be determined with sudo blockdev --getbsz /dev/sda. Sectors versus clusters In computer file systems, a cluster (sometimes also called allocation unit or block) is a unit of disk space allocation for files and directories. To reduce the overhead of managing on-disk data structures, the filesystem does not allocate individual disk sectors by default, but contiguous groups of sectors, called clusters. On a disk that uses 512-byte sectors, a 512-byte cluster contains one sector, whereas a 4-kibibyte (KiB) cluster contains eight sectors. A cluster is the smallest logical amount of disk space that can be allocated to hold a file. Storing small files on a filesystem with large clusters will therefore waste disk space; such wasted disk space is called slack space. For cluster sizes which are small versus the average file size, the wasted space per file will be statistically about half of the cluster size; for large cluster sizes, the wasted space will become greater. However, a larger cluster size reduces bookkeeping overhead and fragmentation, which may improve reading and writing speed overall. Typical cluster sizes range from 1 sector (512 B) to 128 sectors (64 KiB). A cluster need not be", "a free block no allocation policy All that is required is a pointer to the border between the allocated and free area of from space Therefore allocation in a copying GC is a fast a stack allocation Forwarding pointer Objects must be copied to to space only once This is obtained by storing a forwarding pointer in the from space version of the object once it ha been copied checking for the presence of a forwarding pointer when visiting an object and copying it if no forwarding pointer is found using the forwarding pointer otherwise Cheney's copying GC Copying can be done by depth first traversal of the reachability graph but this can lead to stack overflow Cheney s copying GC doe a breadth first traversal of the reachability graph requires only one pointer a additional state Cheney's copying GC Breadth first traversal requires remembering the set of object that have been visited but whose child haven't been visited Cheney's observation This set can be represented a a pointer into to space called scan that partition pointer to object that have been visited and pointer to object that haven't been visited Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From To R R R R Cheney's copying GC scan free From" ]
[ "Linked-list allocation", "Continuous allocation", "Index allocation with B-tree", "Index allocation with Hash-table" ]
['Continuous allocation']
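The choice of contiguous (here called "Continuous") allocation for a read-only, randomly accessed file system follows from the arithmetic below. This is a hedged sketch, not code from the lecture: the struct name, field names, and the 4 KB block size are assumptions made for illustration.

```c
#include <stdint.h>

#define BLOCK_SIZE 4096u   /* assumed file-system block size */

/* Under contiguous allocation the per-file metadata is just an extent:
   the first physical block and the number of blocks. */
struct extent {
    uint32_t start_block;
    uint32_t nblocks;
};

/* Random access costs one division and one seek: no pointer chasing
   (linked blocks) and no table walk (FAT). */
static inline uint32_t block_for_offset(const struct extent *e, uint64_t byte_offset)
{
    return e->start_block + (uint32_t)(byte_offset / BLOCK_SIZE);
}
```

Because the files are immutable, the usual drawbacks the context lists (external fragmentation, having to know the file size at creation time) do not apply, which is why the same scheme suits CD/DVD/Blu-ray media.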
50
Which of the following operations would switch the user program from user space to kernel space?
[ "'s basic functions, such as scheduling processes and controlling peripherals. In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times. The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor. Kernel Program The kernel's main purpose is to manage the limited resources of a computer: The kernel program should perform process scheduling, which is also known as a context switch. The kernel creates a process control block when a computer program is selected for execution. However, an executing program gets exclusive access to the central processing unit only for a time slice. To provide each user with the appearance of continuous access, the kernel quickly preempts each process control block to execute another one. The goal for system developers is to minimize dispatch latency. The kernel program should perform memory management. When the kernel initially loads an executable into memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion. The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes", "printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system level functions for programmers to use. Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing. Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface. Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface. The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals. 
Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift. Utility program A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated. Utility programs include compression programs so data files are stored on less disk space. Compressed programs also save time when data files are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses. Microcode program A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer", "memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion. The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable. To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire execution file completely. The kernel is responsible for translating virtual addresses into physical addresses. The kernel may request data from the memory controller and, instead, receive a page fault. If so, the kernel accesses the memory management unit to populate the physical data region and translate the address. The kernel allocates memory from the heap upon request by a process. When the process is finished with the memory, the process may request for it to be freed. If the process exits without requesting all allocated memory to be freed, then the kernel performs garbage collection to free the memory. The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes. The kernel program should perform file system management. The kernel has instructions to create, retrieve, update, and delete files. The kernel program should perform device management. The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system level functions for programmers to use. 
Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors,", "return from main() Free the memory of process Remove entry from process list Time Problem #2 Control process execution on a CPU How does the OS stop running a process and switch to another one, which is required for virtualizing the CPU? 19 Basic technique: Limited direct execution OS Program Create an entry for process list Allocate memory for program Load program into memory Set up stack with argc/argv Clear registers Execute main() function Run main() Execute return from main() Free the memory of process Remove entry from process list Time Problem #2 Control process execution on a CPU How does the OS stop running a process and switch to another one, which is required for virtualizing the CPU? 20 Limited direct execution: Two problems! Problem #1 Restricted operations How does the OS ensure that a process does not execute/run a privileged code, while running it efficiently? 21 Limited direct execution with dual mode Operating system kernel mode executes User process executes Kernel mode User mode CPU can execute only regular instructions Can execute regular and privileged instructions 22 Different names for different architectures A simplified table* not accounting for hardware virtualization and legacy modes Architecture Kernel mode User mode x86-64 Ring 0 Ring 3 Arm EL1 EL0 RISC-V S-mode U-mode 23 Privileged instructions Common examples of privileged instructions: ●Change the MMU register that controls page tables (mov %cr3 on x86-64) ●Enable or disable interrupts ●Access I/O devices ●Change privilege levels ●... When the CPU attempts to execute a privileged instruction from user mode: ●The instruction does not execute ●The instruction traps (#General protection fault on x86) ●The operating system takes control L03.3: System calls CS202 - Computer Systems Lectures slides adapted from the OS courses from Cornell, EPFL, IITB, UCB, UMASS, and UU Question How can a process request (from the OS) for operations that are only possible in the kernel mode (example: IO requests)? 25 26 Requesting OS services (user mode → kernel mode) • Processes can request OS services through the system call API (example: fork/exec/wait) • System calls transfer execution to", "arity of control that the operating environment has over privileges for an individual process. In practice, it is rarely possible to control a process's access to memory, processing time, I/O device addresses or modes with the precision needed to facilitate only the precise set of privileges a process will require. The original formulation is from Jerome Saltzer: Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job. Peter J. Denning, in his paper \"Fault Tolerant Operating Systems\", set it in a broader perspective among \"The four fundamental principles of fault tolerance\". \"Dynamic assignments of privileges\" was earlier discussed by Roger Needham in 1972. Historically, the oldest instance of (least privilege) is probably the source code of login.c, which begins execution with super-user permissions and—the instant they are no longer necessary—dismisses them via setuid() with a non-zero argument as demonstrated in the Version 6 Unix source code. 
Implementation The kernel always runs with maximum privileges since it is the operating system core and has hardware access. One of the principal responsibilities of an operating system, particularly a multi-user operating system, is management of the hardware's availability and requests to access it from running processes. When the kernel crashes, the mechanisms by which it maintains state also fail. Therefore, even if there is a way for the CPU to recover without a hard reset, security continues to be enforced, but the operating system cannot properly respond to the failure because it was not possible to detect the failure. This is because kernel execution either halted or the program counter resumed execution from somewhere in an endless, and—usually—non-functional loop. This would be akin to either experiencing amnesia (kernel execution failure) or being trapped in a closed maze that always returns to the starting point (closed loops). If execution picks up after the crash by loading and running trojan code, the author of the trojan code can usurp control of all processes. The principle of least privilege forces code to run with the lowest privilege/permission level possible. This means that the code that resumes the code execution-whether trojan" ]
[ "Dividing integer by 0.", "Calling sin() in math library.", "Invoking read() syscall.", "Jumping to an invalid address." ]
['Dividing integer by 0.', 'Invoking read() syscall.', 'Jumping to an invalid address.']
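All three correct cases enter the kernel, either as a requested trap (the read() system call) or as an exception (integer division by zero, a jump to an unmapped address), while sin() is resolved entirely in user space by the math library. The short C program below is an illustrative sketch of that contrast; it uses only standard calls (sin, read, printf).

```c
#include <math.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    double x = sin(1.0);            /* library code, stays in user space */

    char buf[16];
    /* read() is a thin wrapper around the read system call: the CPU
       switches to kernel mode (e.g., via the syscall instruction),
       the kernel services the request, then returns to user mode. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);

    printf("sin(1.0)=%f, read returned %zd\n", x, n);
    return 0;
}
```

Running the program under strace shows a line for read() but nothing for sin(); compile with -lm if your libc does not provide sin() directly.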
61
Which flag prevents user programs from reading and writing kernel data?
[ "s was done without Institutional Review Board (IRB) approval. Despite undergoing review by the conference, this breach of ethical responsibilities was not detected during the paper's review process. This incident sparked criticism from the Linux community and the broader cybersecurity community. Greg Kroah-Hartman, one of the lead maintainers of the kernel, banned both the researchers and the university from making further contributions to the Linux project, ultimately leading the authors and the university to retract the paper and issue an apology to the community of Linux kernel developers. In response to this incident, IEEE S&P committed to adding a ethics review step in their paper review process and improving their documentation surrounding ethics declarations in research papers.", "vilniusquartet.com/\" \"Mozilla/55 (Windows NT 10.0; WOW64; rv:55.0) Gecko/20100101 Firefox/55\" 14.76.5.24 - [16/Jan/2018:02:19:06 +0200] \"GET /css/style.css HTTP/1.1\" 200 2480 \"http://www.krom.org/\" \"Mozilla/55 (Windows NT 10.0; WOW64; rv:55.0) Gecko/20100101 Firefox/55\" 132.29.235.184 - [16/Jan/2018:03:56:13 +0200] \"GET /vvk/ HTTP/1.1\" 200 5073 \"-\" \"Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1 (KHTML, like Gecko) Version/9.0 Mobile/13B143 Safari/601.1\" Logs of an Apache web server com-402 - Netops & Secops - Protecting History (logging) 40 Things you should not log Passwords! source: Bleeping computer com-402 - Netops & Secops - Protecting History (logging) 41 Things you should not log Swiss federal act on data protection requires strict security mechanisms for log containing sensitive personal information religious, ideological, political or trade union-related views or activities, health, the intimate sphere or the racial origin, social security measures, administrative or criminal proceedings and sanctions; Basically, the content of potentially private e-mail and Internet access logs can contain sensitive information Internet access logs should only be generated in an anonymous way. nominal analysis of Internet access is only allowed if there are tangible signs of abuse Mailboxes and logs should be protected against unauthorized access ProtectingData(backups) Netops & Secops com-402 - Netops & Secops - Protecting Data (backups) 42 Backups source: Gitlab Timeline 2017/01/31 6pm UTC: Spammers are hammering Git- Lab’s database, causing a lockup. 2017/01/31 10pm UTC: DB replication effectively stops. 2017/01/31 11pm-ish UTC: team-member-1 starts re- moving db1.cluster.gitlab.com by accident. 2017/01/31 11:27pm UTC: team-member", "printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system level functions for programmers to use. Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing. Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface. Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface. 
The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals. Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift. Utility program A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated. Utility programs include compression programs so data files are stored on less disk space. Compressed programs also save time when data files are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses. Microcode program A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer", "only affects performance – Memory view is split into two views: control and data plane ∗The control plane is a view that only contains code pointers (and transitively all related pointers) ∗The data plane contains only data, code pointers are left empty (void/unused data) – The two planes must be seperated and data in the control plance must be protected from pointer dereferences in the data plane – CPI protects pointers and sensitive pointers ∗CPI enforces memory safety for select data • Sandboxing – Kernel isolates process memory ∗The kernel provides the most well known form of sandboxinng ∗Process are sandboxed and connot access privileged intructions directly ∗To access resources, they must fo through a system call that elevated privileges and asks the kernel to handle the access ∗The kernel can then enforce security, fairness, access policies ∗Sandboxing in enable through HW, namely different privileges – chroot / containers isolate process from each other ∗Containers are lighweight form of virtualization ∗They isolate a group of processes from all other processes ∗Root us restricted to the container but not the full system ∗Sandboxing powered in SW, through kernel data structures – seccomp restricts process from interacting with the kernel ∗Seccomp restricts system calls and parameters accessible by a single process ∗Processes are sandboxed based on a policy ∗In the most constrained case, the allowed system calls are only : read, write, close, exit, sigreturn ∗Sandboxing powered in SW, through kernel data structures – Software absed Fault Isolation isolated components in a process ∗SFI restricts code execution/data access inside a single process ∗Application and untrusted code run in the same address space ∗The untrusted code may only read/write the untrusted data segment ∗Sandboxing is enabled through SW instrumentation Finding bugs Testing • Testing is the process of analyzing a program to find errors • An error is a deviation between observed bahaviour and specified behaviour • 
“Testing can only show the", "widely-used operating system component without Institutional Review Board (IRB) approval. The paper was accepted and was scheduled to be published, however, after criticism from the Linux kernel community, the authors of the paper retracted the paper and issued a public apology. In response to this incident, IEEE S&P committed to adding a ethics review step in their paper review process and improving their documentation surrounding ethics declarations in research papers. History The conference was initially conceived by researchers Stan Ames and George Davida in 1980 as a small workshop for discussing computer security and privacy. This workshop gradually evolved into a larger gathering within the field. Held initially at Claremont Resort, the first few iterations of the event witnessed a division between cryptographers and systems security researchers. Discussions during these early iterations predominantly focused on theoretical research, neglecting practical implementation considerations. This division persisted, to the extent that cryptographers would often leave sessions focused on systems security topics. In response, subsequent iterations of the conference integrated panels that encompassed both cryptography and systems security discussions within the same sessions. Over time, the conference's attendance grew, leading to a relocation to San Francisco in 2011 due to venue capacity limitations. Structure IEEE Symposium on Security and Privacy considers papers from a wide range of topics related to computer security and privacy. Every year, a list of topics of interest is published by the program chairs of the conference which changes based on the trends in the field. In past meetings, IEEE Symposium on Security and Privacy have considered papers from topics like web security, online abuse, blockchain security, hardware security, malware analysis and artificial intelligence. The conference follows a single-track model for its proceedings, meaning only one session takes place at any given time. This approach deviates from the multi-track format commonly used in other security and privacy conferences, where multiple sessions on different topics run concurrently. Papers submitted for consideration to the conference reviewed using a double-blind process to ensure fairness. However, this model constrains the conference in the number of papers it can accept, resulting in a low acceptance rate often in the single digits, unlike" ]
[ "PTE_P", "PTE_U", "PTE_D", "PTE_W" ]
['PTE_U']
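PTE_U is the user/supervisor bit of an x86 page table entry: when it is clear, the page can be touched only in kernel mode, which is how kernel data is shielded from both reads and writes by user programs (PTE_W alone would still allow reads). The defines below follow the x86 bit layout as used by the JOS headers (inc/mmu.h); treat the helper as a sketch rather than actual kernel code.

```c
#include <stdint.h>

/* x86 page-table-entry flag bits (same values as JOS's inc/mmu.h). */
#define PTE_P 0x001   /* Present */
#define PTE_W 0x002   /* Writable */
#define PTE_U 0x004   /* User-accessible; if clear, supervisor (kernel) only */
#define PTE_D 0x040   /* Dirty, set by hardware on a write */

typedef uint32_t pte_t;

/* A user-mode access (read or write) is permitted only if both
   the present bit and the user bit are set in the mapping. */
static inline int user_may_access(pte_t pte)
{
    return (pte & PTE_P) && (pte & PTE_U);
}
```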
62
In JOS, after a user-level page fault handler finishes executing, how is the control flow transferred back to the program? (You may get insights from the code snippet of _pagefault_upcall.)
[ "s call downwards, i.e., from application components to those closer to the hardware, while events call upwards. Certain primitive events are bound to hardware interrupts. Components are statically linked to each other via their interfaces. This increases runtime efficiency, encourages robust design, and allows for better static analysis of programs. References External links Official website", "instruction for debugging Page fault Instruction page fault Code Indicates a fault during instruction fetch due to virtual memory issue Load page fault Code Raised during a load operation when a page related fault occurs Store page fault Code Raised during a store operation when a page related fault occurs Understanding and properly handling these interrupt and exception is crucial for effective RISC V programming and system design Possible Undefined Instruction Handler Below is a possible implementation of an undefined instruction handler in RISC V assembly Notes by Ali EL AZDI RISC V Machine Mode Interrupt Handling In RISC V architecture machine mode interrupt handling is managed through three key control and status register mie mip and mstatus These register play distinct role in enabling monitoring and controlling interrupt mie Machine Interrupt Enable This register determines which interrupt the processor can take and which it must ignore Key bit include MEIE Enables machine level external interrupt MTIE Enables machine level timer interrupt mip Machine Interrupt Pending This register list the interrupt that are currently pending Key bit include MEIP Indicates a pending machine level external interrupt MTIP Indicates a pending machine level timer interrupt mstatus Machine Status This register contains the global interrupt enable flag and other state information Important field include MIE Globally enables interrupt when set to and disables them when set to MPIE Holds the value of MIE prior to a trap The diagram below illustrates the structure of these register These register provide the foundation for interrupt handling in machine mode ensuring efficient and precise interrupt management The Stack Problem A few week ago we discussed a potential issue with the stack that wa What should we do when the stack hit it limit We might be able to find a solution to this problem now CHAPTER PART II D PROCESSOR I OS AND EXCEPTIONS W Stack Full Detection To detect when the stack is full we can use a watchpoint Writing Handlers is Very Very Tricky To write the exception handler for the stack full detection we cannot use the stack Writing interrupt or exception handler is inherently complex particularly due to the restriction that the stack cannot be used Additionally many register may be untouchable during execution This necessitates careful design to handle these constraint Challenges Stack usage Direct stack usage is prohibited necessitating alternative storage mechanism Register constraint In many case touching any general purpose register is disallowed Solutions", "TE “read-only” in parent and child trees; increment reference count of all frames ● Later, handle page faults due to disallowed write ● If the refcount >1, allocate a new frame, copy content; decrement original refcount; update PTE as writable with new PFN ● If the refcount = 1; update PTE as writable ● Invalidate corresponding TLB entry 57 Swapping: when main memory runs out • Observation: Main memory may not be enough for all memory of all processes Working set: Amount of memory that process needs at a given point in time. Can vary! 
• Idea: Store unused pages on the disk • Allows the OS to reclaim memory when necessary • Allows the OS to over-provision (hand out more memory than physically available) • When needed, the OS finds and pushes unused pages to disk • OS can create a special file or designate a region on the disk to store unused pages 58 Swapping: page fault • MMU translates virtual to physical addresses using the OS provided data structures (page tables) • The present bit for each page table entry at each level indicates if the reference is valid • MMU checks present bit during translation • If a page is not present then MMU triggers a page fault (exception) • OS then enforces its policy to handle the page fault 59 Swapping: page fault handling • Page fault handler checks where the fault occurred: • Which process? (locate data structure) • What address? (Search page in page table) • If the page is on the disk, OS issues a request to load the faulted page and tells the scheduler to switch to another process • If the page is not swapped out, the OS creates the mapping and updates data structures • OS then resumes the faulting process by re-executing the faulting instruction 60 Swapping out • OS mechanism is straightforward: • Copy the victim page to disk • Keep track of the on-disk location in the process structure • Invalidate the PTE (all PTEs) that point to the victim page • OS policy is much more complex • Selecting the wrong victims can have catastrophic performance impact (thrashing) • Selecting “good” victims - prediction based", "unknown to the adversary Stack canary are inserted in the stack helping to detect overflow attack Windows also us safe exception handler which aim at keeping the system safe even after error This countermeasure make sure that after an error there is no undefined behavior but the system only can execute a pre defined set of error handling function Bugs prevention is hard Current system deploy known countermeasure that have reasonable impact on performance These countermeasure however are far from ensuring that there are no vulnerability We will now study another approach to increase the security of software Software testing which can be used to find bug and fix then them Software testing executes code under different circumstance with the goal of finding configuration that raise an error An error is a deviation between how we expect the program would function and what actually happens This can be an error regarding functionality the program doe not provide the expected result and error regarding operation the program crash is too slow even never terminating But what about security We have learned in the first lecture of the course that testing for security is hard It cannot provide absence of bug Still finding a many bug a possible help increasing the safety of software Ideally we would like to test all possible Control flow all possible path through the program i e all possible outcome of branch in a program if else clause for clause while clause etc Data flow all possible value for the variable location that are used by the program Of course testing all possible path and data value is impossible these are too many state Here we have an example program The value a and a cover all flow The former implies that a a is True and the instruction within the if is executed The latter implies that a a is False and the instruction within the if is not executed However even all statement are executed and both flow are explored not all data flow are considered In particular the data flow 
a is not considered but is the one that would raise a bug a in this case a a would be True but x is not reserved for the program x ha position starting in position Thus a would make the program crash There are two way of testing for security property Manual review in which the test to be carried out is defined by a human trying to identify corner case that may appear in reality Whether these corner case could trigger a bug can be investigated by code review in which human read each others code to search for programming error or by implementing test case so that the check can be", "program. For that, the OS needs to create a new process and create a new address space to load the program Let’s divide and conquer: • fork() creates a new process (replica) with a copy of its own address space • exec() replaces the old program image with a new program image fork() exec() exit() wait() Why do we need fork() and exec()? 38 Multiple programs can run simultaneously Better utilization of hardware resources Users can perform various operations between fork() and exec() calls to enable various use cases: • To redirect standard input/output: • fork, close/open file descriptors, exec • To switch users: • fork, setuid, exec • To start a process with a different current directory: • fork, chdir, exec fork() exec() exit() wait() Why do we need fork() and exec()? open/close are special file-system calls Set user ID (change user who can be the owner of the process) Go to a specified directory 39 wait(): Waiting for a child process • Child processes are tied to their parent • There exists a hierarchy among processes on forking A parent process uses wait() to suspend its execution until one of its children terminates. The parent process then gets the exit status of the terminated child pid_t wait (int *status); • If no child is running, then the wait() call has no effect at all • Else, wait() suspends the caller until one of its children terminates • Returns the PID of the terminated child process fork() exec() exit() wait() 40 exit(): Terminating a process When a process terminates, it executes exit(), either directly on its own, or indirectly via library code void exit (int status); • The call has no return value, as the process terminates after calling the function • The exit() call resumes the execution of a waiting parent process fork() exec() exit() wait() Waiting for children to die... 41 • Scenarios under which a process terminates • By calling exit() itself • OS terminat" ]
[ "The control flow will be transferred to kernel first, then to Env that caused the page fault.", "The control flow will be transferred to Env that caused the page fault directly." ]
['The control flow will be transferred to Env that caused the page fault directly.']
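The mechanism behind this answer: the JOS kernel runs the registered upcall (_pagefault_upcall) on the user exception stack with a UTrapframe describing the faulting state; after the C handler returns, the upcall pushes the trap-time EIP onto the trap-time stack, restores the saved registers and EFLAGS, switches back to the trap-time stack, and executes ret, so execution resumes at the faulting instruction without re-entering the kernel. The sketch below is modeled on the user-side registration code in the JOS labs (lib/pgfault.c); names follow JOS, the exception-stack allocation is omitted, and this is not the verbatim lab code.

```c
struct UTrapframe;                              /* trap-time state saved on the exception stack */

extern void (*_pgfault_handler)(struct UTrapframe *utf);
extern void _pgfault_upcall(void);              /* assembly entry point (lib/pfentry.S) */
extern int  sys_env_set_pgfault_upcall(int envid, void *upcall);

void set_pgfault_handler(void (*handler)(struct UTrapframe *utf))
{
    if (_pgfault_handler == 0) {
        /* First call: the real code also allocates the user exception
           stack here, then tells the kernel where the upcall lives. */
        sys_env_set_pgfault_upcall(0, _pgfault_upcall);
    }
    /* The assembly upcall invokes *_pgfault_handler, then returns
       directly to the faulting instruction as described above. */
    _pgfault_handler = handler;
}
```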
64
What is the content of the superblock in the JOS file system?
[ "JEAN was a dialect of the JOSS programming language developed for and used on ICT 1900 series computers in the late 1960s and early 1970s; it was implemented under the MINIMOP operating system. It was used at universities including the University of Southampton. The name was an acronym derived from \"JOSS Extended and Adapted for Nineteen-hundred\". It was operated interactively from a Teletype terminal, as opposed to using batch processing. JEAN programs could include expressions (such as A*(B+C)), commands (such as TYPE to display the result of a calculation) and clauses (such as FOR, appended to an expression to evaluate it repeatedly).", "at the beginning of the partition Data blocks Data blocks Data blocks Data blocks Data blocks Data blocks Inodes Free lists • One logical superblock per file system • Stores the metadata about the file system • Number of inodes • Number of data blocks • Where the inode table begins • May contain information to manage free inodes/data blocks • Read first when mounting a file system 53 FS superblock • Various ways to allocate data to files: • Contiguous allocation: All bytes together, in order • Linked structure: Blocks ends with the next pointer • File allocation table: Table that contains block references • Multi-level indexed: Tree of pointers • Which approach is better? • Fragmentation, sequential access / random access, metadata overhead, ability to grow/shrink files, large files, small files 54 File allocation • All data blocks of each file is allocated contiguously • Simple: Only need start block and size • Efficient: One seek to read an entire file • Fragmentation: external fragmentation (can be serious) • Usability: User needs to know file’s size at the time of creation • Great for read-only file systems (CD/DVD/BlueRay) 55 File allocation: Contiguous file1 file2 file3 file4 Physical block • Each file consists of a linked list of blocks • Usually first word of each block points to the next block • In the above illustration, showing the next block pointer at the end • The rest of the block is data 56 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 • Each file consists of a linked list of blocks • Space utilization: No external fragmentation • Simple: Only need to find the first block of a file • Performance: Random access is slow → high seek cost (on the disk) • Implementation: Blocks mix data and metadata • Overhead: One pointer per block metadata is required 57 File allocation: Linked blocks File block 0 file1 next File block 1 next File block 2 next File block 3 next File block 4 next Physical block 7 8 33 17 4 Decouple data and meta", "the OS course from Cornell EPFL IITB UCB UMASS and UU File system manages data for user Given a large set N of block Need data structure to encode file hierarchy and per file metadata Overhead metadata v file data size should be low Internal fragmentation should be low Efficient access of file content external fragmentation metadata access Implement file system APIs Several choice are available similar to virtual memory File system implementation File system is stored on disk Disk can be divided into one or more partition Sector of disk master boot record MBR which contains Bootstrap code loaded and executed by the firmware Partition table address of where partition start and end First block of each partition ha a boot block Loaded by executing code in MBR and executed on boot File system layout 
Partition Partition Partition Boot block Superblock Free space management Inodes Files and directory Entire disk MBR Partition table Peeking inside a partition storage block Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Peeking inside a partition storage block Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Data block I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Data block Data block Data block Data block Data block Data block Peeking inside a partition storage block Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Other block store metadata An array of inodes At byte per block with block for inodes file system can have up to file Data block I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D", "posites called 'Items' which are the unit of storage and retrieval. Higher-level structures that combine these Items are client-devised, and include for example unlimited size records of an unlimited number of columns or attributes, with complex attribute values of unlimited size. Keys may then be a composition of components. Attribute values can be ordered sets of composite components, character large objects (CLOB's), binary large objects (BLOB's), or unlimited sparse arrays. Other higher-level structures built of multiple Items include key/value associations like ordered maps, ordered sets, Entity-Attribute-Value nets of quadruples, trees, DAG's, taxonomies, or full-text indexes. Mixtures of these can occur along with other custom client-defined structures. Any ItemSpace may be represented as an extended JSON document, and JSON printers and parsers are provided. JSON documents are not native but are mapped to sets of Items when desired, at any scale determined by an Item prefix that represents the path to the sub-document. Hence, the entire database or any subtree of it down to a single value can be represented as extended JSON. Because Items are always kept sorted, the JSON keys of an object are always in order. Data encoding An 'ItemSpace' represents the entire database, and it is a simple ordered set of Items, with no other state. An Item is actually stored with each component encoded in variable-length binary form in a char array, with components being self-describing in a standard format which sorts correctly. Programmers deal with the components only as primitives, and the stored data is strongly typed. Data is not stored as text to be parsed with weak typing as in JSON or XML, nor is it parsed out of programmer-defined binary stream representations. There are no custom client-devised binary formats that can grow brittle, and which can have security, documentation, upgrade, testing, versioning, scaling, and debugging problems, such as is the case with Java Object serialization. 
Performance", "D D D D D D D D D Data block Data block Data block Data block Data block Data block Peeking inside a partition storage block Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Other block store metadata An array of inodes At byte per block with block for inodes file system can have up to file Data block I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Data block Data block Data block Data block Data block Data block Inodes Data block Peeking inside a partition storage block i d I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Other block store metadata An array of inodes At byte per block with block for inodes file system can have up to file Bitmap tracking free inodes and data block free list Data block Data block Data block Data block Data block Data block Inodes Free list Data block Peeking inside a partition storage block B S i d I I I I I D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D Persistent storage modeled a a sequence of N block From to N block each of KB Some block store data Other block store metadata An array of inodes At byte per block with block for inodes file system can have up to file Bitmap tracking free inodes and data block free list Boot block and superblock are at the beginning of the partition Data block Data block Data block Data block Data block Data block Inodes Free list One logical superblock per file system Stores the metadata about the file system Number of" ]
[ "List of all directories", "List of all files", "List of all blocks", "List of all inodes", "Total number of blocks on disk", "Magic number identifying the file system", "Node with the root directory ('/')" ]
['Total number of blocks on disk', 'Magic number identifying the file system', "Node with the root directory ('/')"]
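The three items in the answer are exactly the fields of the on-disk superblock in the JOS file system. The struct below mirrors JOS's struct Super, but it is a from-memory sketch: struct File here is a simplified stand-in (the real one also carries the direct and indirect block pointers plus padding), and the exact FS_MAGIC value should be taken from the lab sources (inc/fs.h).

```c
#include <stdint.h>

/* Simplified stand-in for JOS's struct File; enough to show what
   the superblock stores for the root directory. */
struct File {
    char     f_name[128];
    uint32_t f_size;
    uint32_t f_type;      /* regular file or directory */
};

/* On-disk superblock: read first when the file system is mounted. */
struct Super {
    uint32_t    s_magic;    /* magic number identifying the file system */
    uint32_t    s_nblocks;  /* total number of blocks on the disk */
    struct File s_root;     /* File node for the root directory ('/') */
};
```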
69
In which of the following cases does the TLB need to be flushed?
[ "walks” page table 49 Translation lookaside buffer (TLB) A cache of recent virtual address to physical address mappings Translating virtual address to physical address: 1. MMU first looks up TLB 2. If TLB hit: physical address can be directly used 3. Only if TLB miss: MMU “walks” page table • TLB misses are expensive (multiple memory accesses) • Locality of reference helps to have high hit rate • TLB entries may become invalid on context switch and change of page tables 50 TLB: memory access cost Page table level With TLB & TLB hit Without TLB No paging 1 1 level 1 2 2 level 1 3 3 level 1 4 • Assume we have 64-bit address space; and all page table levels are cached when TLB is present (i.e., TLB hit) • Number of memory accesses to read/write at memory location X (a process accessing a virtual address X) Key: the TLB is NOT in memory, but rather a special circuit 51 How does the CPU execute a read/write operation? • CPU issues a load for a virtual address (as part of a memory load/store) • MMU checks TLB for virtual address • TLB miss: MMU executes page walk • Page table entry (PTE) is not present: page fault, switch to the OS, which raises segfault • PTE is present: update TLB, continue • TLB hit: obtain physical address, fetch memory location and return to CPU • Note: TLB also checks for the protection bit 52 TLB Invalidations Page tables are in physical memory ●PTBR (base register) is a PFN ●In a multi-level tree, each outer PTE points to the PFN of the next level entry ●The outer PTE hold the PFN used by the CPU changes on context switch HW invalidates TLB when PTBR changes changes on page faults and other conditions TLB is inside the CPU, ●a special circuit called a content-addressable memory (CAM) ●Copied from the outer PTE after the (expensive) walk OS must (selectively) invalidate TLB after changing PTE entries 53 Summary - page tables What is the typical content of the page table? • Page table entries (P", "BJ Flush CS 307 – Fall 2018 Lec.08 - Slide 80 TM Recovery u Run from recovery code ROB BR LD/ST queue CS 307 – Fall 2018 Lec.08 - Slide 81 TM Implementations u Several decades worth of research o Originally most done in software o Software overhead for detecting conflicts is high o Many HW and HW/SW hybrids have emerged o See the book by Rajwar & Larus u Real implementations in Intel, IBM CPUs o Keep speculative data in cache hierarchies (not just pipeline) CS 307 – Fall 2018 Lec.08 - Slide 82 Extension: Data in Caches u Recall: we assumed addresses tracked in LSQ o How can we extend that to storing it in the caches and store buffer? u Simple idea: add some bits to mark certain cache lines as speculative o Same coherence mechanism to detect conflicts Cache Tag Coherence State Speculative Bit CS 307 – Fall 2018 Lec.08 - Slide 83 Detection Policy u Check for conflict at every operation o Use coherence actions (e.g., BusRd, BusRdX, BusInv) o Intuition: “I suspect conflicts might happen, so always check to see if one has occurred upon each coherence operation.” u There are other options, TM open research area o See me if you are interested! 
CS 307 – Fall 2018 Lec.08 - Slide 84 A note on Software Transactional Memory u SW for speculation, buffering and detection in o No hardware support o Huge in the academic community o Mostly a research testbed before HTM emerged o Too slow for real-world deployment CS 307 – Fall 2018 Lec.08 - Slide 85 Summary u HW speculation can simplify software problems o Instead of focusing on finest grain locking (hard to program), place it conservatively and HW can elide u HW can enable declarative concurrency control o Transforms implementation (locks) into intention (transactions) o Uses the same hardware as lock elision o Programs can more easily manage concurrency u Open problems: detection, recovery policies", "send [R] to all forall pj wait until either recieve [R, (snj, idj), vj] or suspect pj v = vj with the highest (snj, idj) (sn, id) = highest (snj, idj) send [W, (sn, id), v] to all forall pj wait until either receive [W, (sn, id), ack] or detect [pj] return v At pi T1 : when receive [W] from pj send [W, sn] to pj when receive [R] from pj send [R, (sn, id), vi] to pj T2 : when receive [W, (snj, idj), v] from pj if (snj, idj) > (sn, id) then vi = v (sn, id) = (snj, idj) send [W, (sn, id), ack] to pj when receive [W, (snj, idj), v] from pj if (snj, idj) > (sn, id) then vi = v (sn, id) = (snj, idj) send [W, (sn, id), ack] to pj • From fail-stop to fail-silent – We assume a mojority of correct processes – In the 1-N algorithm, the writer writes in a majority using a timestamp determined locally and the reader selects a value from a majority and then imposes this value on a majority – In the N-N algorithm, the writers determines first the timestamp using a majority Terminating Reliable Broadcast (trb) • Like reliable broadcast, terminating reliable broadcast (TRB) is a communication primitive used to disseminate a message among a set of processes in a reliable way • TRB is however strictly stronger than (uniform) reliable broadcast • Like with reliable broadcast, correct processes in TRB agree on the set of messages they deliver • Like with (uniform) reliable broadcast, every correct process in TRB delivers every message delivered by any correct process • Unlike with reliable broadcast, every correct process delivers a message, event if the broadcaster crashes 11 • The", "∗ {\\displaystyle \\ast } Implication by the operation ⇒ {\\displaystyle \\Rightarrow } (which is called the residuum of ∗ {\\displaystyle \\ast } ) Weak conjunction and weak disjunction by the lattice operations ∧ {\\displaystyle \\wedge } and ∨, {\\displaystyle \\vee,} respectively (usually denoted by the same symbols as the connectives, if no confusion can arise) The truth constants zero (top) and one (bottom) by the constants 0 and 1 The equivalence connective is interpreted by the operation ⇔ {\\displaystyle \\Leftrightarrow } defined as x ⇔ y ≡ ( x ⇒ y ) ∧ ( y ⇒ x ) {\\displaystyle x\\Leftrightarrow y\\equiv (x\\Rightarrow y)\\wedge (y\\Rightarrow x)} Due to the prelinearity condition, this definition is equivalent to one that uses ∗ {\\displaystyle \\ast } instead of ∧, {\\displaystyle \\wedge,} thus x ⇔ y ≡ ( x ⇒ y ) ∗ ( y ⇒ x ) {\\displaystyle x\\Leftrightarrow y\\equiv (x\\Rightarrow y)\\ast (y\\Rightarrow x)} Negation is interpreted by the definable operation − x ≡ x ⇒ 0 {\\displaystyle -x\\equiv x\\Rightarrow 0} With this interpretation of connectives, any evaluation ev of propositional variables in L uniquely extends to an evaluation e of all well-formed formulae of MTL, by the following inductive definition (which 
generalizes Tarski's truth conditions), for any formulae A, B, and any propositional variable p: e(p) = ev(p), e(⊥) = 0, e(⊤) = 1, e(A & B) = e(A) ∗ e(B), e(A → B) = e(A) ⇒ e(B), e(A ∧ B) = e(A) ∧ e(B), e(A ∨ B) = e(A) ∨ e(B", ", then Γ ⊢ t1 : Bool and Γ ⊢ t2, t3 : R. 4. If Γ ⊢ x : R, then x:R ∈ Γ. Inversion Lemma: 1. If Γ ⊢ true : R, then R = Bool. 2. If Γ ⊢ false : R, then R = Bool. 3. If Γ ⊢ if t1 then t2 else t3 : R, then Γ ⊢ t1 : Bool and Γ ⊢ t2, t3 : R. 4. If Γ ⊢ x : R, then x:R ∈ Γ. 5. If Γ ⊢ λx:T1.t2 : R, then R = T1→R2 for some R2 with Γ, x:T1 ⊢ t2 : R2. 6. If Γ ⊢ t1 t2 : R" ]
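The MTL passage at the end of the context list above defines how an evaluation of propositional variables extends to all formulae once a left-continuous t-norm is fixed. As a hedged illustration only (the passage fixes no particular t-norm; the choice of the Łukasiewicz t-norm and the helper names below are mine), this small C sketch instantiates the connectives and evaluates them for two sample truth values.

```c
/* Evaluate the MTL connectives from the passage above for one concrete
 * left-continuous t-norm (Łukasiewicz); the t-norm choice is illustrative,
 * not something mandated by the passage. */
#include <stdio.h>

static double tnorm(double x, double y)    { return x + y - 1.0 > 0.0 ? x + y - 1.0 : 0.0; } /* A & B  */
static double residuum(double x, double y) { return 1.0 - x + y < 1.0 ? 1.0 - x + y : 1.0; } /* A -> B */
static double weak_and(double x, double y) { return x < y ? x : y; }                         /* A /\ B */
static double weak_or(double x, double y)  { return x > y ? x : y; }                         /* A \/ B */

int main(void)
{
    double a = 0.7, b = 0.4;   /* ev(A), ev(B) for two propositional variables */
    printf("e(A & B)  = %.2f\n", tnorm(a, b));      /* 0.10 */
    printf("e(A -> B) = %.2f\n", residuum(a, b));   /* 0.70 */
    printf("e(A /\\ B) = %.2f\n", weak_and(a, b));  /* 0.40 */
    printf("e(A \\/ B) = %.2f\n", weak_or(a, b));   /* 0.70 */
    return 0;
}
```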
[ "Inserting a new page into the page table for a user-space application.", "Deleting a page from the page table.", "Changing the read/write permission bit in the page table.", "Inserting a new page into the page table for kernel." ]
['Deleting a page from the page table.', 'Changing the read/write permission bit in the page table.']
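The choices and accepted answers just above revolve around page-table updates and the read/write permission bit. As a rough illustration (not taken from the dataset or any particular kernel), the C sketch below models a simplified x86-style page-table entry: the present, read/write, and user/supervisor flag positions match the low bits of a real x86 PTE, but the pte_set_writable helper and the frame value are made up for the example.

```c
/* Minimal sketch of a simplified x86-style page-table entry (PTE). */
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT    (1ULL << 0)            /* mapping is valid */
#define PTE_WRITABLE   (1ULL << 1)            /* read/write permission bit */
#define PTE_USER       (1ULL << 2)            /* accessible from user mode */
#define PTE_FRAME_MASK 0x000FFFFFFFFFF000ULL  /* bits 12..51 hold the frame address */

/* Hypothetical helper: flip the R/W permission of an existing entry.
 * In a real kernel, changing or removing a live mapping must be followed
 * by a TLB invalidation so stale permissions are not kept cached. */
static uint64_t pte_set_writable(uint64_t pte, int writable)
{
    return writable ? (pte | PTE_WRITABLE)
                    : (pte & ~PTE_WRITABLE);
}

int main(void)
{
    uint64_t pte = 0x200000ULL | PTE_PRESENT | PTE_USER;  /* read-only user page */
    pte = pte_set_writable(pte, 1);
    printf("frame=0x%llx writable=%d user=%d\n",
           (unsigned long long)((pte & PTE_FRAME_MASK) >> 12),
           !!(pte & PTE_WRITABLE), !!(pte & PTE_USER));
    return 0;
}
```

The code only shows where the permission bit lives in an entry; which updates additionally require TLB invalidation is exactly the distinction the listed choices are probing.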
71
Select all valid statements about a UNIX-like shell.
[ "le. As all subjects assigned to a role are the same to the system, the system does not have the means to see if there are one or two users to enforce the separation of privilege. Another option, instead of looking at similarity between users, is to look at of permissions that are often needed together to run the system. These permissions are then put together in so‐called groups. Sometimes it makes sense for a subject to belong to a group, but this subject may not be allowed to access one of the resources in the group by the security policy. In this case we can implement what are called negative permissions, which indicate that a particular subject does not have a particular permission on an object. For instance in the example Alice needs access to file2 and file3, so group1 makes sense for her; but she should not read or write on file1. Instead of creating a new group, or breaching the security policy, we can add a negative permission that indicates that Negative permissions should always be tested first. If there is a negative permission, there is no need to check anything else. It guarantees that there is no error when checking and obtaining some positive permission (fail safe: if something is incorrect, the subject cannot access). In UNIX systems principals are users. Each user has an identity UID. There are some reserved UIDs, which we will see in the following slides. Users belong to groups, with identity GID. User accounts are defined in a file /etc/passwd. Each line defines a user as username:password:UID:GID:info:home:shell info is a comment field that can contain some information about the user; home, the absolute path to the directory where the user will appear when they log in; and shell the absolute path to the default shell of the user If users belong to more groups, those appear in the file /etc/group As in any group‐based access control, each group the user belongs to provides new permissions to the user. In UNIX, everything is a file. Files are created by users (they can be created by the root user) The system uses Discretionary access control. Each user owns their files and has access to them. UNIX has a very simple way of defining who else has access by defining three groups: owner:", "; and shell the absolute path to the default shell of the user If users belong to more groups, those appear in the file /etc/group As in any group‐based access control, each group the user belongs to provides new permissions to the user. In UNIX, everything is a file. Files are created by users (they can be created by the root user) The system uses Discretionary access control. Each user owns their files and has access to them. UNIX has a very simple way of defining who else has access by defining three groups: owner: the group is formed just by the file owner group: the file’s group other: anyone that is not the owner or on one of the owner’s group(s) The diference between sudo and su is that sudo – executes one action as super user su – changes the current user to another user. If there is no argument, it changes to root, the super user <unk>doing this is very dangerous, as any action you would realize would not undergo security checks To allow users to access systems files and services, there is the suid mechanism that we will see in a couple of slides Permission bits (see next slide) provide permissions for the 3 groups in UNIX: the user, the user’s group,others. 
The three permissions are read, write (modify or delete), and execute. When the file represents a directory, the permissions change semantics: – Read → the user has the right to list the files inside the directory – Write → the user has the right to create a file in the directory (by creating a new file or moving a file) – Execute → the user has the right to “move into” the directory, i.e., to execute the command “cd” Besides the 9 permission bits, there can be three attributes: suid/sgid – see slide below sticky bit – it only applies to directories, and it indicates that 1) the directory can only be deleted by the directory owner or the super user 2) files in the directory can only be renamed by the directory owner or the super user The sticky bit is on in /tmp, a folder shared by all users, so that only owners or super users can", "for joining files horizontally Strip (Unix) – Shell command for removing non-essential information from executable code files References External links strings – Shell and Utilities Reference, The Single UNIX Specification, Version 5 from The Open Group strings(1) – Plan 9 Programmer's Manual, Volume 1 strings(1) – Inferno General commands Manual", "Brian W. Kernighan 1996 The Software Tools Users Group (Dennis E. Hall, Deborah Scherrer, Joe Sventek) 1995 The Creation of USENET by Jim Ellis, Steven M. Bellovin, and Tom Truscott 1994 Networking Technologies 1993 Berkeley UNIX See also AUUG LISA (conference) Marshall Kirk McKusick LISA SIG: Formerly SAGE (organization) Unix References External links USENIX: The Advanced Computing Systems Association Official USENIX YouTube Channel", "Brian W. Kernighan 1996 The Software Tools Users Group (Dennis E. Hall, Deborah Scherrer, Joe Sventek) 1995 The Creation of USENET by Jim Ellis, Steven M. Bellovin, and Tom Truscott 1994 Networking Technologies 1993 Berkeley UNIX See also AUUG LISA (conference) Marshall Kirk McKusick LISA SIG: Formerly SAGE (organization) Unix References External links USENIX: The Advanced Computing Systems Association Official USENIX YouTube Channel" ]
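The permission-bit discussion in the context above (read/write/execute for owner, group and others, plus the suid/sgid and sticky attributes) maps directly onto the mode bits exposed by POSIX. The short C sketch below is an illustration rather than anything taken from the dataset: it stats a path and prints its rwx triplets, using only the standard <sys/stat.h> mode-bit macros.

```c
/* Print the owner/group/other rwx bits of a file, plus setuid/setgid/sticky. */
#include <stdio.h>
#include <sys/stat.h>

static void print_mode(mode_t m)
{
    const char *who[] = {"owner", "group", "other"};
    mode_t bits[][3] = {
        {S_IRUSR, S_IWUSR, S_IXUSR},
        {S_IRGRP, S_IWGRP, S_IXGRP},
        {S_IROTH, S_IWOTH, S_IXOTH},
    };
    for (int i = 0; i < 3; i++)
        printf("%s: %c%c%c\n", who[i],
               (m & bits[i][0]) ? 'r' : '-',
               (m & bits[i][1]) ? 'w' : '-',
               (m & bits[i][2]) ? 'x' : '-');
    printf("suid=%d sgid=%d sticky=%d\n",
           !!(m & S_ISUID), !!(m & S_ISGID), !!(m & S_ISVTX));
}

int main(int argc, char **argv)
{
    struct stat st;
    const char *path = (argc > 1) ? argv[1] : "/tmp";  /* /tmp usually has the sticky bit */
    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    print_mode(st.st_mode);
    return 0;
}
```

Run against /tmp this typically reports the sticky bit as set, matching the slide's point that only a file's owner (or root) can delete or rename files there.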
[ "The shell is a program, that runs in user-space.", "The shell is a program, that runs in kernel-space.", "The shell is a program, which reads from standard input.", "The shell is a function inside kernel.", "The shell is the layer, which has to be always used for communicating with kernel.", "The shell must run only in a single instance. Multiple running instances cause memory corruption.", "The shell is a user interface for UNIX-like systems." ]
['The shell is a program that runs in user-space.', 'The shell is a program that reads from standard input.', 'The shell is a user interface for UNIX-like systems.']
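The accepted answers above describe the shell as an ordinary user-space program that reads standard input. A stripped-down sketch of such a read-fork-exec loop is shown below; it is a toy (one command name per line, no argument parsing, no pipes), not any particular shell's implementation, and the prompt string is invented.

```c
/* Toy user-space shell: read a command name from stdin, fork, exec, wait.
 * Everything here uses ordinary POSIX calls; no kernel privileges involved. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];

    for (;;) {
        fputs("toysh> ", stdout);
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)   /* EOF (Ctrl-D) ends the shell */
            break;
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0')
            continue;

        pid_t pid = fork();                    /* the shell is just another process */
        if (pid == 0) {
            execlp(line, line, (char *)NULL);  /* no arguments in this toy version */
            perror("execlp");                  /* only reached if exec fails */
            _exit(127);
        }
        waitpid(pid, NULL, 0);                 /* wait, then print the next prompt */
    }
    return 0;
}
```

Several instances of such a program can run side by side, each in its own process, which is why the "single instance only" choice is not among the accepted answers.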
74
In x86, which of the following are synchronous exceptions?
[ "Automatic mutual exclusion is a parallel computing programming paradigm in which threads are divided into atomic chunks, and the atomic execution of the chunks automatically parallelized using transactional memory. References See also Bulk synchronous parallel", "synchronous\", serial link. If you have an external modem attached to your home or office computer, the chances are that the connection is over an asynchronous serial connection. Its advantage is that it is simple — it can be implemented using only three wires: Send, Receive and Signal Ground (or Signal Common). In an RS-232 interface, an idle connection has a continuous negative voltage applied. A 'zero' bit is represented as a positive voltage difference with respect to the Signal Ground and a 'one' bit is a negative voltage with respect to signal ground, thus indistinguishable from the idle state. This means you need to know when a 'one' bit starts to distinguish it from idle. This is done by agreeing in advance how fast data will be transmitted over a link, then using a start bit to signal the start of a byte — this start bit will be a 'zero' bit. Stop bits are 'one' bits i.e. negative voltage. Actually, more things will have been agreed in advance — the speed of bit transmission, the number of bits per character, the parity and the number of stop bits (signifying the end of a character). So a designation of 9600-8-E-2 would be 9,600 bits per second, with eight bits per character, even parity and two stop bits. A common set-up of an asynchronous serial connection would be 9600-8-N-1 (9,600 bit/s, 8 bits per character, no parity and 1 stop bit) - a total of 10 bits transmitted to send one 8 bit character (one start bit, the 8 bits making up the byte transmitted and one stop bit). This is an overhead of 20%, so a 9,600 bit/s asynchronous serial link will not transmit data at 9600/8 bytes per second (1200 byte/s) but actually, in this case 9600/10 bytes per second (960 byte/s), which is considerably slower than expected. It can get worse. If parity is specified and we use 2 stop bits, the overhead for carrying one 8 bit character is 4 bits (one start bit, one parity bit and two stop bits) - or 50%! In this case a 9600 bit/s connection", "– all txn reads will see a consistent snapshot of the database – the txn successfully commits only if no updates it has made conflict with any concurrent updates made since that snapshot. • SI does not guarantee serializability! – SerializableSI: Stronger, more conservative protocol • Implemented in Oracle, MS SQL Server, Postgres. 61 Snapshot isolation • Conceptually, txn works on a copy of the db made at txn start time. – Very expensive à not implemented that way but still expensive. – Guarantees that reads in the txn see a consistent version of the db. • At commit time, verify that the values changed by the transaction have not been changed by other transactions since the snapshot was taken. • Write skew anomaly – Not serializable, but permitted by snapshot isolation! 62 T1: R(X)R(Y) W(X) C T2: R(X)R(Y) W(Y) C Write skew – (more concrete) example 63 [Source: Martin Kleppmann] Discussion • SI is related to optimistic CC, in that – Conceptually, snapshots are created at txn start. – There is an analysis phase at the end to decide whether a transaction may commit (do writesets overlap?). • Multiversion CC is a way to implement (a stronger) snapshot isolation. 64", "one-copy-serializability model. The \"CORBA Fault Tolerant Objects standard\" is based on the virtual synchrony model. 
Virtual synchrony was also used in developing the New York Stock Exchange fault-tolerance architecture, the French Air Traffic Control System, the US Navy AEGIS system, IBM's Business Process replication architecture for WebSphere and Microsoft's Windows Clustering architecture for Windows Longhorn enterprise servers. Systems that support virtual synchrony Virtual synchrony was first supported by Cornell University and was called the \"Isis Toolkit\". Cornell's most current version, Vsync was released in 2013 under the name Isis2 (the name was changed from Isis2 to Vsync in 2015 in the wake of a terrorist attack in Paris by an extremist organization called ISIS), with periodic updates and revisions since that time. The most current stable release is V2.2.2020; it was released on November 14, 2015; the V2.2.2048 release is currently available in Beta form. Vsync aims at the massive data centers that support cloud computing. Other such systems include the Horus system the Transis system, the Totem system, an IBM system called Phoenix, a distributed security key management system called Rampart, the \"Ensemble system\", the Quicksilver system, \"The OpenAIS project\", its derivative the Corosync Cluster Engine and several products (including the IBM and Microsoft ones mentioned earlier). Other existing or proposed protocols Data Distribution Service Pragmatic General Multicast (PGM) QuickSilver Scalable Multicast Scalable Reliable Multicast SMART Multicast Library support JGroups (Java API) Spread: C/C++ API, Java API RMF (C# API) hmbdc open source (headers only) C++ middleware, ultra-low latency/high throughput, scalable and reliable inter-thread, IPC and network messaging References Further reading Reliable Distributed Systems: Technologies, Web Services and Applications. K.P. Birman. Springer Verlag (1997). Textbook, covers a broad spectrum of distributed computing concepts, including virtual synchrony. Distributed Systems: Principles and Paradigms (2nd Edition). Andrew S. Tanenbaum, Maarten van Steen (2002). Textbook, covers a broad spectrum of distributed computing concept", "in a blocking state. Upon the completion of the task, the server is notified by a callback. The server unblocks the client and transmits the response back to the client. In case of thread starvation, clients are blocked waiting for threads to become available. See also Asynchronous system Asynchronous circuit" ]
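The asynchronous-serial passage in the context above works through the framing overhead by hand (start bit, data bits, optional parity, stop bits). The small C sketch below simply re-does that arithmetic; the helper name is made up for illustration.

```c
/* Effective byte throughput of an asynchronous serial link, reproducing the
 * passage's arithmetic: frame = start bit + data bits + parity + stop bits. */
#include <stdio.h>

static double bytes_per_second(double bit_rate, int data_bits,
                               int parity_bits, int stop_bits)
{
    int frame_bits = 1 /* start */ + data_bits + parity_bits + stop_bits;
    return bit_rate / frame_bits;
}

int main(void)
{
    /* 9600-8-N-1: 10 bits per 8-bit character -> 960 bytes/s (20% overhead) */
    printf("9600-8-N-1: %.0f bytes/s\n", bytes_per_second(9600, 8, 0, 1));
    /* 9600-8-E-2: 12 bits per 8-bit character -> 800 bytes/s (50% overhead) */
    printf("9600-8-E-2: %.0f bytes/s\n", bytes_per_second(9600, 8, 1, 2));
    return 0;
}
```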
[ "Divide error", "Timer", "Page Fault", "Keyboard" ]
['Divide error', 'Page Fault']
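The accepted answers distinguish exceptions raised synchronously by the executing instruction (divide error, page fault) from asynchronous device interrupts (timer, keyboard). As a rough, hedged demonstration, assuming an x86 Linux machine where an integer division by zero is reported to the process as SIGFPE, the program below installs a handler and then performs such a division: the #DE exception is delivered at exactly that instruction, which is what makes it synchronous.

```c
/* Trigger a synchronous x86 divide error (#DE) and observe it as SIGFPE.
 * volatile keeps the compiler from folding the division away at build time. */
#include <signal.h>
#include <unistd.h>

static void on_fpe(int sig)
{
    (void)sig;
    /* Delivered right at the faulting div instruction, i.e. synchronously. */
    static const char msg[] = "caught SIGFPE (divide error)\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(0);               /* returning from SIGFPE would re-execute the fault */
}

int main(void)
{
    signal(SIGFPE, on_fpe);

    volatile int num = 1;
    volatile int den = 0;
    volatile int result = num / den;   /* #DE raised here, not at some later time */
    (void)result;

    return 1;               /* not reached when the handler exits the process */
}
```

Timer and keyboard events, by contrast, arrive from hardware whenever they are ready, independent of which instruction is currently running, so they are asynchronous interrupts rather than exceptions caused by the program.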