Operating Systems | Subject Wise Gate CS Questions

Operating Systems | Subject Wise

Question 1
Page fault frequency in an operating system is reduced when the:
A
Processes tend to be I/O bound
B
Locality of reference is applicable to the process
C
Size of pages is reduced
D
Processes tend to be CPU bound
Question 1 Explanation: 
The Principle of Locality of Reference justifies the use of cache Memory.
In other words, Locality of Reference refers to the tendency of the computer program to access instructions whose addresses are near one another.

There are two forms of locality of reference that determine how data or instructions fetched from main memory get stored in cache memory:

Temporal Locality :
Temporal locality means current data or instruction that is being fetched may be needed soon.
Temporal locality is exploited by the Least Recently Used (LRU) replacement algorithm.
Clustering in time: items referenced in the immediate past have a high probability of being re-referenced in the immediate future

Spatial Locality:
Spatial locality means instruction or data near to the current memory location that is being fetched, may be needed soon in the near future.
Clustering in space: items located physically near an item referenced in the immediate past have a high probability of being re-referenced in the immediate future  
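As a rough illustration (not part of the original question), the following C sketch contrasts a sequential traversal, which benefits from spatial locality, with a large-stride traversal that touches a new cache line on almost every access. The array size and stride are arbitrary assumptions chosen for the demo.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)        /* 16M ints - arbitrary size for the demo    */
#define STRIDE 4096        /* a large stride defeats spatial locality   */

int main(void)
{
    int *a = calloc(N, sizeof *a);   /* zero-initialised array */
    long long sum = 0;
    clock_t t0, t1;
    if (!a) return 1;

    /* Sequential access: consecutive elements share cache lines. */
    t0 = clock();
    for (int i = 0; i < N; i++) sum += a[i];
    t1 = clock();
    printf("sequential: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* Strided access: nearly every reference misses in the cache. */
    t0 = clock();
    for (int s = 0; s < STRIDE; s++)
        for (int i = s; i < N; i += STRIDE) sum += a[i];
    t1 = clock();
    printf("strided:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    printf("checksum %lld\n", sum);  /* prevents the loops being optimised away */
    free(a);
    return 0;
}

On most machines the sequential loop is noticeably faster even though both loops read the same N elements, which is exactly the effect the cache exploits.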
Question 2
Social network analysts use ____ to access Facebook data
A
Facebook system calls
B
Facebook APIs
C
Facebook Scripts
D
Facebook System Libraries
Question 2 Explanation: 
Social network analysts use Facebook APIs to access Facebook data.
API is the acronym for Application Programming Interface, which is a software intermediary that allows 2 applications to talk to each other.
Question 3
Time taken to switch between user and kernel modes is ___ the time taken to switch between two processes.
A
More than
B
Independent of
C
Less than
D
Equal to
Question 3 Explanation: 
  • Time taken to switch between user and kernel modes is LESS THAN the time taken to switch between two processes.
  • Switching between two processes is far more expensive than switching between kernel and user mode of execution.
  • A context switch requires saving the previous process's PCB (Process Control Block) and registers and then loading the new process's PCB and registers.
  • Switching between kernel and user mode is a very fast operation, because the OS only has to change a single bit at the hardware level.
  • A context switch does involve a switch to kernel mode as well, but that mode switch is only a small part of the total context-switch cost.
Question 4
How many page faults occur in the LRU page replacement algorithm for the given reference string, with four page frames?
7,0,1,2,0,3,4,2,3,0,3,2,1,2,0,1
A
6
B
8
C
7
D
9
Question 4 Explanation: 
Least Recently Used (LRU) replaces the page that has not been referenced for the longest time. Tracing the reference string with four frames gives 7 page faults (a simulation sketch follows).
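A small C sketch (not part of the original answer) that counts LRU page faults for this reference string with four frames; it reports 7 faults.

#include <stdio.h>

/* Count page faults under LRU for a reference string and a given number of frames. */
static int lru_faults(const int *ref, int n, int frames)
{
    int page[16], last_used[16], size = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < size; i++)
            if (page[i] == ref[t]) { hit = i; break; }

        if (hit >= 0) {
            last_used[hit] = t;              /* refresh recency on a hit */
        } else {
            faults++;
            if (size < frames) {             /* a free frame is still available */
                page[size] = ref[t];
                last_used[size] = t;
                size++;
            } else {                         /* evict the least recently used page */
                int victim = 0;
                for (int i = 1; i < size; i++)
                    if (last_used[i] < last_used[victim]) victim = i;
                page[victim] = ref[t];
                last_used[victim] = t;
            }
        }
    }
    return faults;
}

int main(void)
{
    int ref[] = {7,0,1,2,0,3,4,2,3,0,3,2,1,2,0,1};
    int n = sizeof ref / sizeof ref[0];
    printf("LRU page faults with 4 frames: %d\n", lru_faults(ref, n, 4));   /* 7 */
    return 0;
}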
Question 5
Which of the following is correct?
A
In asymmetric multiprocessing, the processors are peers.
B
In symmetric multiplexing, the processors are placed symmetrically on the motherboard
C
Clustered systems are used for high performance computing
D
All multiprocessor systems are multicore systems
Question 5 Explanation: 
option(A) In asymmetric multiprocessing, the processors are peers. - False
In asymmetric multiprocessing, processors are controlled by one central processor or master processor. CPUs are not self-scheduling in AMP; a master–slave approach is used.

option(B) In symmetric multiplexing, the processors are placed symmetrically on the motherboard - False
In symmetric multiprocessing, all processors are self-scheduling, share a common memory space and have the same precedence; any processor can perform operating system tasks. SMP describes how the operating system treats the processors, not their physical placement on the motherboard, so the given statement is incorrect.

option(C) Clustered systems are used for high performance computing. - True
Clustered systems result in high performance as they contain two or more individual computer systems merged together. These work as a parallel unit and result in much better performance for the system.

option(D) All multiprocessor systems are multicore systems - False
A processor that has more than one core is called Multicore Processor while one with single core is called Unicore Processor or Uniprocessor
Two or more processors or CPUs present in same computer, sharing system bus, memory and I/O is called Multi Processing System. It allows parallel execution of different processors.
In a multicore system, we have only one CPU and multiple cores are present in that CPU. While in a multiprocessor system, we have more than one CPU.
If you want to run a single program then the multicore system will be faster. But if you are running multiple programs then the multiprocessor system will be faster.
Question 6
File operations that manipulate the ‘open-count’ maintained for each entry in open file table include:
A
Open,write
B
Read,write
C
Write,close
D
Open,close
Question 6 Explanation: 
The operating system maintains a small table called the open-file table, which contains information about all open files.
  • When a file operation is requested, the file is specified via an index into this table, so no searching is required.
  • When the file is no longer being actively used, it is closed by the process, and the OS removes its entry from the open-file table.

The open-file table also keeps an open count for each file to indicate how many processes have the file open.
Each close() decreases this open count; when the open count reaches zero, the file is no longer in use and its entry is removed from the open-file table. Hence open and close are the operations that manipulate the open count.

Refer : http://boron.physics.metu.edu.tr/ozdogan/OperatingSystems/week11/node4.html
Question 7
In a computer system, memory-mapped access takes 100 nanoseconds when a page is found in the TLB. If the page is not in the TLB, the access takes 400 nanoseconds. Assuming a hit ratio of 80%, the effective access time is:
A
120ns
B
160ns
C
200ns
D
500ns
Question 7 Explanation: 
Given,
Hit ratio = H = 80 % = 0.8
Miss ratio = (1 – H) = 20 % = 0.2
Access time when the page is found in the TLB (hit) = 100 ns
Access time when the page is not in the TLB (miss) = 400 ns

Formula:
Effective memory access time (EAT) =
TLB_Miss_Time * (1- hit_ratio) + TLB_hit_time * hit_ratio

Calculation:
EAT = 0.8*100+0.2*400
=80 + 80
= 160
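The same arithmetic written as a tiny C check (a sketch; the hit ratio and times are the values given in the question):

#include <stdio.h>

int main(void)
{
    double hit_ratio = 0.8;     /* TLB hit ratio                        */
    double hit_time  = 100.0;   /* ns when the page is in the TLB       */
    double miss_time = 400.0;   /* ns when the page is not in the TLB   */

    /* EAT = hit_ratio * hit_time + (1 - hit_ratio) * miss_time */
    double eat = hit_ratio * hit_time + (1.0 - hit_ratio) * miss_time;
    printf("Effective access time = %.0f ns\n", eat);   /* prints 160 ns */
    return 0;
}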
Question 8
The producer and consumer processes share the following variables:
int n;
Semaphore M=1
Semaphore E=n
Semaphore F=0
The consumer process must execute ______ and ______ before removing an item from the buffer.
A
Signal(M), signal(F)
B
Signal(M), wait(F)
C
Signal(F), wait(M)
D
Wait(F), wait(M)
Question 8 Explanation: 
In the producer–consumer (bounded-buffer) problem, the consumer must execute wait(F) followed by wait(M) before removing an item from the buffer, so option (D) is the correct answer.
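For reference, here is a minimal sketch of the bounded-buffer structure using the question's semaphores (M = mutex, E = empty slots, F = full slots). POSIX semaphores, two threads and a fixed buffer size are assumptions made purely for illustration; the point is the wait(F); wait(M) ... signal(M); signal(E) pattern in the consumer.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 8                     /* buffer size (the question's n) */

static int buffer[N], in = 0, out = 0;
static sem_t M;                 /* mutex, initialised to 1        */
static sem_t E;                 /* empty slots, initialised to N  */
static sem_t F;                 /* full slots, initialised to 0   */

static void *producer(void *arg)
{
    for (int item = 1; item <= 20; item++) {
        sem_wait(&E);                   /* wait(E): wait for an empty slot   */
        sem_wait(&M);                   /* wait(M): enter critical section   */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&M);                   /* signal(M): leave critical section */
        sem_post(&F);                   /* signal(F): one more full slot     */
    }
    return arg;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 20; i++) {
        sem_wait(&F);                   /* wait(F): wait for a full slot     */
        sem_wait(&M);                   /* wait(M): enter critical section   */
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&M);                   /* signal(M): leave critical section */
        sem_post(&E);                   /* signal(E): one more empty slot    */
        printf("consumed %d\n", item);
    }
    return arg;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&M, 0, 1);
    sem_init(&E, 0, N);
    sem_init(&F, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}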
Question 9
Disk requests come to a disk driver for cylinders in the order 10, 22, 20, 2, 40, 6 and 38 at a given time when the given disk drive is reading from cylinder 20. The seek time is 6ms per cylinder. What is the total seek time, if the disk arm scheduling algorithm FCFS is used?
A
900 ms
B
850 ms
C
360 ms
D
876 ms
Question 9 Explanation: 
The disk head starts at cylinder 20, and FCFS serves the requests in arrival order: 10, 22, 20, 2, 40, 6, 38.
Total seek time = (10 + 12 + 2 + 18 + 38 + 34 + 32) × 6
= 146*6
= 876 ms
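A small C sketch of the same FCFS seek-time calculation (the request order, start position and per-cylinder seek cost are taken from the question):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int requests[] = {10, 22, 20, 2, 40, 6, 38};   /* FCFS: service order = arrival order */
    int n = sizeof requests / sizeof requests[0];
    int head = 20;                                  /* initial head position  */
    int seek_per_cylinder = 6;                      /* ms per cylinder moved  */
    int moves = 0;

    for (int i = 0; i < n; i++) {
        moves += abs(requests[i] - head);           /* distance to the next request */
        head = requests[i];
    }
    printf("Total movement = %d cylinders, seek time = %d ms\n",
           moves, moves * seek_per_cylinder);       /* 146 cylinders, 876 ms */
    return 0;
}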
Question 10
If a processor has 32 bit virtual address, 28 bit physical address, 2 KB pages. How many bits are required for the virtual, physical page number?
A
17,21
B
21,17
C
6,10
D
None
Question 10 Explanation: 
Virtual address = 32 bits
Virtual address space = 2^32 B
Physical address = 28 bits
Physical address space = 2^28 B
Page size = 2 KB = 2^11 B

Number of virtual pages
= virtual address space / page size
= 2^32 / 2^11 = 2^21

Number of physical frames
= physical address space / page size
= 2^28 / 2^11 = 2^17

∴ Number of bits for the virtual page number = 21.
∴ Number of bits for the physical page number = 17.
Question 11
The address sequence generated by tracing a particular program executing in a pure demand paging system with 100 records per page, with 1 free main memory frame is recorded as follows.
0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260,0320, 0370,
What is the number of page faults?
A
15,4
B
6,4
C
7,2
D
4,6
Question 11 Explanation: 
Page size = 100 records
Frame size = 100 records
  • 0100−1 page fault       → Records Range [0100−0199 ]
  • 0200−2  page faults  → Range[0200−0299]
  • 0430−3  page faults. → Range[0400−0499]
  • 0499−3  page faults. → Range[ 0400−0499]
  • 0510−4  page faults. → Range[ 0500−0599 ]
  • 0530−4  page faults → Range[ 0500−0599 ]
  • 0560−4 page faults → Range[ 0500−0599 ]
  • 0120−5  page faults → Range[ 0100−0199 ]
  • 0220−6  page faults → Range[ 0200−0299 ]
  • 0240−6  page faults → Range[ 0200−0299 ]
  • 0260−6 page faults → Range[ 0200−0299 ]
  • 0320−7  page faults → Range[ 0300−0399 ]
  • 0370−7  page faults → Range[ 0300−0399 ]
Total number of page fault is 7.
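A minimal sketch that reproduces the count above, assuming 100 records per page and a single frame (addresses written without the leading zeros):

#include <stdio.h>

int main(void)
{
    int addr[] = {100, 200, 430, 499, 510, 530, 560, 120, 220, 240, 260, 320, 370};
    int n = sizeof addr / sizeof addr[0];
    int current_page = -1;      /* the single frame starts empty */
    int faults = 0;

    for (int i = 0; i < n; i++) {
        int page = addr[i] / 100;          /* 100 records per page            */
        if (page != current_page) {        /* page not resident -> page fault */
            faults++;
            current_page = page;
        }
    }
    printf("Page faults = %d\n", faults);  /* prints 7 */
    return 0;
}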
Question 12
The major functions or requirements for an I/O module fall into which of the following categories?
A
Control and timing
B
Processor Communication
C
Data buffering
D
All of these
Question 12 Explanation: 
The major functions or requirements for an I/O module fall into the following categories
1) Control and timing
2) Processor communication
3) Device communication
4) Data buffering
5) Error detection
Question 13
A web page contains many small images. We can speed up the performance of the browser by
A
Having multiple processes executed simultaneously
B
Having multiple threads
C
Increasing the speed of the transmission line
D
Increasing the main memory of the system
Question 13 Explanation: 
Assigning one thread to each image lets the images be fetched and rendered concurrently, which speeds up the browser.
Question 14
If a process is runnable but is temporarily stopped to let another process run, in which state is the process said to be?
A
Running
B
Ready
C
Interrupted
D
Blocked
Question 14 Explanation: 
A process that is runnable but temporarily stopped to let another process run is in the Ready state (see the process state transition diagram).
Question 15
What is coalescing?
A
It is a second strategy for allocating kernel memory
B
The buddy system allocates memory from a fixed size segment consistency of physically contiguous pages
C
Kernel memory is often allocated from a free memory pool different from the list used to satisfy ordinary user mode processes
D
An advantage of the buddy system is how quickly adjacent buddies can be combined to form larger segments using this technique
Question 15 Explanation: 
buddy system
Question 16
Real time operating system always runs on
A
Linux
B
Unix
C
Embedded system
D
Apple’s Mac OS
Question 16 Explanation: 
  • A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time applications that process data as it comes in, typically without buffer delays.
  • Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter increments of time.
  • A real-time system is a time-bound system which has well-defined, fixed time constraints. Processing must be done within the defined constraints or the system will fail. They either are event-driven or time-sharing.
  • Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts.
  • Most RTOS use a pre-emptive scheduling algorithm
  • RTLinux is a hard real-time operating system (RTOS) microkernel that runs the entire Linux operating system as a fully preemptive process.
Question 17
This begins at the root and follows a path down to the specified file, giving the directory names on the path. This is known as
A
Absolute path name
B
Relative path name
C
Definite path name
D
Indefinite path name
Question 17 Explanation: 
A path is either relative or absolute.
An absolute path always contains the root element and the complete directory list required to locate the file.
Example:
/home/sally/statusReport is an absolute path. All of the information needed to locate the file is contained in the path string.
Linux absolute path example 2
/home/users/c/computerhope/public_html/cgi-bin
Internet URL absolute path
https://www.academyera.com/oh.htm

A relative path needs to be combined with another path in order to access a file. Example
joe/foo is a relative path. Without more information, a program cannot reliably locate the joe/foo directory in the file system.
Linux relative path
./public_html/cgi-bin
Internet URL relative path
oh.htm
directory Structure
Question 18
Each process is contained in a single section of memory that is contiguous to the section containing the next process. This is called:
A
Contiguous memory protection
B
Contiguous path name
C
Definite path name
D
Indefinite path name
Question 18 Explanation: 
  • In contiguous memory allocation each process is contained in a single contiguous block of memory. Memory is divided into several fixed size partitions.
  • Each partition contains exactly one process. When a partition is free, a process is selected from the input queue and loaded into it.
  • The free blocks of memory are known as holes. The set of holes is searched to determine which hole is best to allocate.
Question 19
Both the first fit and best fit strategies for memory allocation suffer from
A
External fragmentation
B
Internal fragmentation
C
50-percent rule
D
Segmentation
Question 19 Explanation: 
First Fit algorithm Best Fit algorithm
Question 20
The simplest, but most expensive, approach to introducing redundancy is to duplicate every disk. This technique is called
A
Swap space
B
Mirroring
C
Page slots
D
None of these
Question 20 Explanation: 
mirroring
Question 21
_____ is a sequence of memory read-write operations that are atomic
A
Critical section object
B
Adaptive mutex
C
Turnstile
D
Memory transaction
Question 21 Explanation: 
A memory transaction is a sequence of read-write operations to memory that are performed atomically.
void update()
{
    /* read/write memory */
}
Question 22
The term____ refers to the use of electronics and software within a product, as opposed to a general purpose computer such as a laptop or desktop system
A
Complex system
B
Simple system
C
Fuzzy logic
D
Embedded system
Question 22 Explanation: 
An embedded system combines computer hardware and software within a product to perform a dedicated function, as opposed to a general-purpose computer such as a laptop or desktop system.
Question 23
Sequential, direct, random and associative are access methods and key characteristics of computer____system
A
Stack
B
Count
C
Memory
D
Core
Question 23 Explanation: 
Memory Access Methods
Question 24
In a monolithic kernel, operating system runs in
A
User mode
B
Supervisor mode
C
User/supervisor mode
D
None of these
Question 24 Explanation: 
  • Monolithic kernel means that the whole operating system runs in kernel mode (i.e. highly privileged by the hardware). That is, no part of the OS runs in user mode (lower privilege). Only applications on top of the OS run in user mode.
  • In non-monolithic kernel operating systems, such as Windows, a large part of the OS itself runs in user mode.
  • In either case, the OS can be highly modular.
Question 25
One scheme for communication between user thread library and the kernel is known as
A
Lightweight process
B
Upcall handler
C
Scheduler activation
D
Cleanup handler
Question 25 Explanation: 
Scheduler activations
Question 26
A classic software based solution to the critical section problem is known as
A
Peterson’s solution
B
Process synchronization
C
Coordination
D
Race Condition
Question 26 Explanation: 
 Peterson's solution
Question 27
Consider a set of n tasks with known runtimes R1, R2, …, Rn, to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?
A
Priority scheduling
B
Round robin
C
FCFS
D
SJF
Question 27 Explanation: 
-Throughput means the total number of tasks executed per unit time.
-Shortest Job First gives the maximum throughput because this scheduling technique always selects the process with the smallest execution time to execute next, so the maximum number of tasks is completed per unit time.

Note :
  • Highest Response Ratio Next policy favors shorter jobs but it also limits the waiting time of longer jobs.
  • Priority scheduling can be either preemptive or non-preemptive.
  • Round-robin scheduling is a preemptive scheduling algorithm.
  • Shortest job first scheduling results in maximum throughput.
  • First come first serve scheduling algorithm is the simplest scheduling algorithm.
Question 28
OLE, Microsoft's compound document technology, means
A
Overlay linking and exchange
B
Online linking and embedding
C
Open learning exchange
D
Object linking and embedding
Question 28 Explanation: 
Object Linking & Embedding (OLE). Refer to this link for more information:
https://en.wikipedia.org/wiki/Object_Linking_and_Embedding
Question 29
The problem of indefinite blockage of low priority jobs in general priority scheduling algorithm can be solved using
A
Swapping
B
Dirty bit
C
Aging
D
Compaction
Question 29 Explanation: 
A major problem with the priority scheduling algorithm is indefinite blocking or starvation. This algorithm can leave some low priority processes waiting indefinitely.

The solution to the problem of starvation is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.

For example, if priorities range from 100 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes. Eventually, even a process that started at priority 100 would reach the highest priority.
Question 30
A Thread is also called
A
A scheduler
B
A virtual process
C
A heavyweight process
D
A lightweight process
Question 30 Explanation: 
Threads are sometimes called lightweight processes because the threads of a process share a common memory space.
The memory allocated to the main thread is shared by all other child threads.
In the case of processes, by contrast, each child process needs its own separate memory space.
Question 31
Copying a process from memory to disk to allow space for other processes is called___
A
Demand paging
B
Deadlock
C
Page fault
D
Swapping
Question 31 Explanation: 
Swapping
Question 32
If the time quantum size is 2 units and there is only one job of 14 time units in the ready queue, the round robin scheduling algorithm will cause ___ context switches.
A
1
B
5
C
6
D
7
Question 32 Explanation: 
  • The question clearly says "there is only one job", so strictly speaking the round robin algorithm performs no context switches (there is no other process to switch to).
  • The intended answer, however, counts the switches at the quantum boundaries: the 14-unit job runs as seven quanta ( 0-2 | 2-4 | 4-6 | 6-8 | 8-10 | 10-12 | 12-14 ), giving a total of 6 switches.
Question 33
The memory which does not lose its content on failure of power supply is known as___memory.
A
Main memory
B
Volatile
C
Non volatile
D
ROM
Question 33 Explanation: 
  • Volatile and Non-Volatile Memory are both types of computer memory.
  • Volatile Memory is used to store computer programs and data that CPU needs in real time and is erased once computer is switched off. RAM and Cache memory are volatile memory.
  • Non-volatile memory is static and remains in the computer even if computer is switched off. ROM and HDD are non-volatile memory.
Question 34
The kernel keeps track of the state of each executing program by using a data structure called__
A
Process control block
B
User control block
C
File control block
D
Memory control block
Question 34 Explanation: 
Process Control Block
Question 35
An operating system contains 3 user processes each requiring 2 units of resource R. The minimum number of units of R such that no deadlock will ever occur is
A
5
B
2
C
3
D
4
Question 35 Explanation: 
When there are n processes each requiring 2 units of the resource, the minimum number of units needed to avoid deadlock is n + 1.
Given n = 3
so minimum we require (3+1=4) resources In order to avoid deadlock.

If we give 1 resource unit to each process, the total is 1 + 1 + 1 = 3, and deadlock will definitely occur because each process holds 1 unit and waits for another. If we add 1 more resource (3 + 1 = 4), deadlock can never arise: at least one process can obtain its second unit, complete its execution, and then release 2 units for the other processes to use.
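The same rule as a small helper (a sketch of the general formula n × (need − 1) + 1 for a single resource type):

#include <stdio.h>

/* Minimum number of resource units that guarantees no deadlock when n
 * processes each need at most 'need' units of a single resource type:
 * each process can hold need-1 units and one extra unit still lets
 * some process finish and release what it holds.                       */
static int min_units_no_deadlock(int n, int need)
{
    return n * (need - 1) + 1;
}

int main(void)
{
    printf("%d\n", min_units_no_deadlock(3, 2));   /* prints 4, as in the question */
    return 0;
}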
Question 36
The segmentation memory management scheme suffers from:
A
External fragmentation
B
Internal fragmentation
C
Starvation
D
Ageing
Question 36 Explanation: 
Correct answer is External fragmentation
Segmentation
Question 37
Which of the following technique allows execution of programs larger than the size of physical memory?
A
Thrashing
B
DMA
C
Buffering
D
Demand Paging
Question 37 Explanation: 
Virtual memory is a memory management scheme that allows the execution of processes that may not be completely in main memory. In other words, virtual memory allows execution of partially loaded processes. As a consequence user programs can be larger than the physical memory.
Question 38
In page replacement, ‘adding more frames may cause more page faults’ is referred to as:
A
Thrashing
B
Belady’s anomaly
C
Banker’s anomaly
D
Ageing
Question 38 Explanation: 
Bélády’s anomaly is the name given to the phenomenon where increasing the number of page frames results in an increase in the number of page faults for a given memory access pattern.
This phenomenon is commonly experienced in following page replacement algorithms
First in first out (FIFO)
Second chance algorithm
Random page replacement algorithm
Question 39
____ is an IEEE 1003.1C standard API for thread creation and synchronization in operating system.
A
Mac OS X
B
Solaris
C
POSIX
D
Kernel
Question 39 Explanation: 
POSIX Pthreads is the IEEE 1003.1c standard API for thread creation and synchronization.
Question 40
If a system has multiple instances of resources, to avoid deadlock which of the following algorithms is used?
A
Deadlock avoidance algorithm
B
Aging algorithm
C
Resource allocation graph algorithm
D
Banker’s algorithm
Question 40 Explanation: 
Banker’s Algorithm is a deadlock avoidance algorithm. It is also used for deadlock detection. This algorithm tells that if any system can go into a deadlock or not by analyzing the currently allocated resources and the resources required by it in the future.
Refer this link for more information : https://afteracademy.com/blog/what-is-bankers-algorithm
Question 41
Which of the following is not a CPU scheduling criteria:
A
Dispatch latency
B
CPU utilization
C
Throughput
D
Turnaround time
Question 41 Explanation: 
Dispatch latency is a property of the dispatcher, not a CPU scheduling criterion. The standard CPU scheduling criteria are CPU utilization, throughput, turnaround time, waiting time and response time.
Question 42
An address in the memory is called
A
Physical address
B
Logical address
C
Memory address
D
Word address
Question 42 Explanation: 
physical address and logical address
Question 43
In which of the following state, the process is waiting for processor?
A
Running
B
New
C
Ready
D
Waiting
Question 43 Explanation: 
In the Ready state, a process is loaded in main memory and waiting in the ready queue for the processor (see the process state transition diagram).
Question 44
The mechanism that brings a page into memory only when it is needed is called ___
A
Page replacement
B
Segmentation
C
Fragmentation
D
Demand paging
Question 44 Explanation: 
demand paging
Question 45
Semaphores are used to solve the problem of
A
Mutual exclusion
B
Race condition
C
Process synchronization
D
The belady problem
Question 45 Explanation: 
Note: the question is loosely framed, because the three concepts are closely related.
The main job of semaphores is to provide mutual exclusion in order to achieve process synchronization; a race condition is the problem that arises when mutual exclusion and synchronization are missing.


Semaphores in Process Synchronization
Question 46
Consider the following statements
S1: a small page size causes large page tables
S2: Internal fragmentation is increase with small pages
S3: I/O transfers are more efficient with large pages
Which of the following is true?
A
S1 is true and S3 is false
B
S1 and S2 are true
C
S2 and S3 are true
D
S1 is true and S2 is false
Question 46 Explanation: 
S1: A small page size causes large page tables - True
Smaller page size means more pages required per process. It means large page tables are needed.

S2: Internal fragmentation is increased with small pages - False
Internal fragmentation is the unused space inside the last allocated page of a process. With smaller pages, the average unused space per process is smaller, so internal fragmentation decreases.

S3: I/O transfers are more efficient with large pages - True
An I/O system takes an application I/O request and sends it to the physical device; with larger pages, more data is transferred per page-sized operation, so I/O transfers are more efficient.
Question 47
In __________ disk scheduling algorithm, the disk head moves from one end to other end of the disk, serving the requests along the way. When the head reaches the other end, it immediately returns to the beginning of the disk without serving any requests on the return trip.
A
LOOK
B
SCAN
C
C-LOOK
D
C-SCAN
Question 47 Explanation: 
In  c-scan disk scheduling algorithm, the disk head moves from one end to other end of the disk, serving the requests along the way.
When the head reaches the other end, it immediately returns to the beginning of the disk without serving any requests on the return trip.
Refer : http://www.cs.iit.edu/~cs561/cs450/disksched/disksched.html
Question 48
Suppose there are six files F1, F2, F3, F4, F5, F6 with corresponding sizes 150 KB, 225KB,75 KB, 60 KB, 275 KB and 65 KB respectively. The files are to be stored on a sequential device in such a way that optimizes access time. In what order should the files be stored ?
A
F5, F2, F1, F3, F6, F4
B
F4, F6, F3, F1, F2, F5
C
F1, F2, F3, F4, F5, F6
D
F6, F5, F4, F3, F2, F1
Question 48 Explanation: 
Given file sizes,
F1=150 KB,
F2 = 225 KB,
F3 =75 KB,
F4= 60 KB,
F5 =275 KB,
F6 =65 KB.

For access time optimization sort the files in increasing order of file size.

F4= 60 KB, F6 =65 KB, F3 =75 KB, F1=150 KB,F2 = 225 KB, F5 =275 KB.

Storing the shortest files first minimizes the average access time on a sequential device (the same idea as shortest-job-first scheduling).
So the order will be F4, F6, F3, F1, F2, F5 (option B).
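A small sketch that produces this ordering by sorting the files by size (shortest first), using the sizes given in the question:

#include <stdio.h>
#include <stdlib.h>

struct file { const char *name; int size_kb; };

static int by_size(const void *a, const void *b)
{
    return ((const struct file *)a)->size_kb - ((const struct file *)b)->size_kb;
}

int main(void)
{
    struct file files[] = {
        {"F1", 150}, {"F2", 225}, {"F3", 75},
        {"F4", 60},  {"F5", 275}, {"F6", 65},
    };
    int n = sizeof files / sizeof files[0];

    /* Shortest file first minimizes the average access time on a sequential device. */
    qsort(files, n, sizeof files[0], by_size);

    for (int i = 0; i < n; i++)
        printf("%s ", files[i].name);      /* F4 F6 F3 F1 F2 F5 */
    printf("\n");
    return 0;
}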
Question 49
Which module gives control of the CPU to the process selected by the short - term scheduler ?
A
Dispatcher
B
Interrupt
C
Schedular
D
Threading
Question 49 Explanation: 
Dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call.

The functions of a dispatcher are :
  • Context switches in which the dispatcher saves the state (also known as context) of the process or thread that was previously running; the dispatcher then loads the initial or previously saved state of the new process.
  • Switching to user mode.
  • Jumping to the proper location in the user program to restart that program indicated by its new state.
Question 50
Two atomic operations permissible on Semaphores are __________ and __________.
A
Wait, stop
B
Wait, hold
C
Hold, signal
D
Wait, signal
Question 50 Explanation: 
operations on Semaphores
Question 51
Consider the following statements:
a) DBA uses ‘grant’ statement to confer authentication
b) Privileges granted to ‘public’ are granted only to current users.
c) Privileges granted to a user cannot be passed on to other users
Which of the following is the correct option?
A
Only a is correct
B
Both a and b are correct
C
Both a and c are correct
D
All a,b,c are correct
Question 51 Explanation: 
The DBA uses the GRANT statement to confer privileges, so (a) is correct. Privileges granted to PUBLIC apply to current and future users, so (b) is incorrect, and privileges granted WITH GRANT OPTION can be passed on to other users, so (c) is incorrect.
Question 52
Consider a pair of producer consumer process, sharing the following variables:
int n;
Semaphore M=1;
Semaphore E=n; /*size of buffer*/
Semaphore F=0;
The consumer process must execute ____ and _____ after removing an item from the buffer.
A
Wait(M), signal(E)
B
Signal(E),signal(M)
C
Signal(M),signal(E)
D
Signal(E),wait(M)
Question 52 Explanation: 
In the producer–consumer (bounded-buffer) problem, after removing an item the consumer releases the mutex and then signals an empty slot: signal(M) followed by signal(E).
Question 53
First fit and best fit strategies for memory allocation suffer from ____ and _____ fragmentation, respectively.
A
Internal,internal
B
Internal,external
C
External,external
D
External,internal
Question 53 Explanation: 
First Fit algorithm Best Fit algorithm
Question 54
Consider a hard disk of capacity 1 TB and block size 1 KB. If the free space list is maintained as a bitmap, then the size of the bit vector is
A
10^12 bits
B
10^9 bits
C
10^6 bits
D
10^3 bits
Question 54 Explanation: 
In a bitmap, 1 bit is required to store the status of each block: the bit is set to 1 if the block is in use and to 0 if it is free.
Given data,
Total Memory(Hard Disk capacity) = 1 TB= (10^12) bytes
Block size = 1KB = (10^3) bytes
Number of Block = (10^12/10^3)= (10^9)
So, we need 10^9 bit to store all the information about bitmap,
Size of the bit vector = 10^9 bits

Note:
K = 10^3
M=10^6
G=10^9
T=10^12
Question 55
Degree of multiprogramming is controlled by
A
Medium term scheduler
B
Long term scheduler
C
Number of schedulers in process management module
D
CPU scheduler
Question 55 Explanation: 
Degree of multiprogramming
Question 56
Which of the following statement(s) regarding a linker software is/are true ?
I A function of a linker is to combine several object modules into a single load module.
II A function of a linker is to replace absolute references in an object module by symbolic references to locations in other modules.
A
Only I
B
Only II
C
Both I and II
D
Neither I nor II
Question 56 Explanation: 
Statement I  - A function of a linker is to combine several object modules into a single load module - True
A linker or link editor is a computer program that takes one or more object files (generated by a compiler or an assembler) and combines them into a single executable file, library file, or another "object" file.

Statement II - A function of a linker is to replace absolute references in an object module by symbolic references - False
It is the other way around: the linker replaces symbolic references to locations in other modules with concrete relocatable/absolute addresses.
Question 57
There are three processes P1, P2 and P3 sharing a semaphore for synchronizing a variable. The initial value of the semaphore is one. Assume that a negative value of the semaphore tells us how many processes are waiting in the queue. Processes access the semaphore in the following order:
(a) P2 needs to access
(b) P1 needs to access
(c) P3 needs to access
(d) P2 exits critical section
(e) P1 exits critical section
The final value of semaphore will be :
A
0
B
1
C
-1
D
-2
Question 57 Explanation: 
Given,
There are three processes P1, P2 and P3
Initial value of semaphore S = 1
  1. P2 needs to access: the critical section is free, so P2 enters and the semaphore is decremented by 1; S becomes 0 (no one is waiting).
  2. P1 needs to access: S is decremented by 1 to -1 (the critical section is occupied by P2, so one process is waiting).
  3. P3 needs to access: S is decremented by 1 to -2 (the critical section is still occupied, so P1 and P3 are both waiting - two processes in the queue).
  4. P2 exits the critical section: S is incremented by 1 to -1 (P1 enters the critical section; one process is still waiting).
  5. P1 exits the critical section: S is incremented by 1 to 0 (P3 enters the critical section; no process is waiting).
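A bookkeeping sketch of the same trace (it only tracks the semaphore value; real P and V operations would also block and wake processes):

#include <stdio.h>

static int S = 1;          /* counting semaphore, initial value 1 */

static void P(const char *who)   /* wait(): decrement the semaphore */
{
    S--;
    printf("%s does P : S = %d\n", who, S);
}

static void V(const char *who)   /* signal(): increment the semaphore */
{
    S++;
    printf("%s does V : S = %d\n", who, S);
}

int main(void)
{
    /* A value of S below zero means |S| processes are waiting. */
    P("P2");   /* S = 0  */
    P("P1");   /* S = -1, one process waiting   */
    P("P3");   /* S = -2, two processes waiting */
    V("P2");   /* S = -1 */
    V("P1");   /* S = 0  */
    return 0;
}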
Question 58
In a paging system, it takes 30 ns to search translation Lookaside Buffer (TLB) and 90 ns to access the main memory. If the TLB hit ratio is 70%, the effective memory access time is :
A
48ns
B
147ns
C
120ns
D
84ns
Question 58 Explanation: 
Effective memory access(EMA) =
Hit ratio * (TLB access time + Main memory access time) + (1-Hit ratio) * (TLB access time + (L+1) * main memory time)
Where,
L = Number of levels of page table
Miss Rate = (1-Hit ratio)
Note : This formula is valid only when there are no page faults

Given,
Number of levels of page table (L) = 1
TLB access time = 30 ns
Main memory access time = 90 ns
TLB Hit ratio = 70% = 0.7
EMA = ?
EMA =Hit ratio*(TLB access time + Main memory access time) +(1-Hit ratio) * (TLB access time + 2 * main memory time)
EMA=0.7*(30+90)+0.3(30+(2*90))
=0.7*120 + 0.3(30+(180))
=0.7*120 + 0.3*210
= 84 + 63
= 147 ns
Question 59
Which of the following scheduling algorithms may cause starvation ?
a. First-come-first-served
b. Round Robin
c. Priority
d. Shortest process next
e. Shortest remaining time first
A
A, c and e
B
C, d and e
C
B, d and e
D
B, c and d
Question 59 Explanation: 
Starvation occurs when a low priority process is requesting for a system resource but the resource is never allocated to this process because a higher priority process is utilizing that resource for an extended period.

Round Robin Scheduling: Each process is served by the CPU for a fixed time quantum, so all processes are given the same priority. Starvation doesn't occur because for each round robin cycle, every process is given a fixed time to execute. No process is left behind.

First-come, first-served (FCFS) scheduling is the simplest scheduling algorithm; FCFS can make short processes wait behind very long processes, but every process will eventually get its turn to execute, so it does not suffer from starvation.

In Priority based scheduling if higher priority process keep on coming then low priority process will suffer from starvation.

In Shortest Job First (SJF), if processes with short burst times keep arriving continuously, a process with a longer burst time will keep waiting and suffer from starvation.

As the Shortest remaining time first(SRTF) scheduling is a preemptive version of shortest job scheduling. It may cause starvation as shorter processes may keep coming and a long CPU burst process never gets CPU.
Question 60
Distributed operating systems consist of:
A
Loosely coupled O.S. software on a loosely coupled hardware.
B
Loosely coupled O.S. software on a tightly coupled hardware.
C
Tightly coupled O.S. software on a loosely coupled hardware.
D
Tightly coupled O.S. software on a tightly coupled hardware.
Question 60 Explanation: 
  • A distributed system is a collection of independent computers that appear to the users of the system as a single computer.
  • A distributed operating system is usually defined as running on loosely coupled hardware.
  • There is no shared memory, but tightly coupled software provides the "feel" of a single memory.
  • There is one single operating system image: the goal of a distributed system is to make the collection of machines behave like a single machine. In other words, it is tightly coupled OS software running on loosely coupled hardware.
Question 61
Consider a system with seven processes A through G and six resources R through W.
Resource ownership is as follows :
process A holds R and wants T
process B holds nothing but wants T
process C holds nothing but wants S
process D holds U and wants S & T
process E holds T and wants V
process F holds W and wants S
process G holds V and wants U
Is the system deadlocked ?
If yes, ______ processes are deadlocked.
A
No
B
Yes, A, B, C
C
Yes, D, E, G
D
Yes, A, B, F
Question 61 Explanation: 
Yes, the system is deadlocked. D holds U and wants T, E holds T and wants V, and G holds V and wants U, so D → E → G → D forms a cycle in the resource-allocation/wait-for graph. The deadlocked processes are D, E and G; the remaining processes are merely blocked waiting on them.
Question 62
Suppose that the virtual Address space has eight pages and physical memory with four page frames. If LRU page replacement algorithm is used, _____ number of page faults occur with the reference string.
0 2 1 3 5 4 6 3 7 4 7 3 3 5 5 3 1 1 1 7 2 3 4 1
A
11
B
12
C
10
D
9
E
None
Question 62 Explanation: 
Tracing the reference string with four frames under LRU gives 13 page faults, which is not among options A-D, so the answer is "None".
Question 63
Suppose that the virtual Address space has eight pages and physical memory with four page frames. If LRU page replacement algorithm is used, _____ number of page faults occur with the reference string.
0 2 1 3 5 4 6 3 7 4 7 3 3 5 5 3 1 1 1 7 2 3 4 1
A
11
B
12
C
10
D
9
E
None
Question 63 Explanation: 
Tracing the reference string with four frames under LRU gives 13 page faults, which is not among options A-D, so the answer is "None".
Question 64
Consider a system having ‘m’ resources of the same type. These resources are shared by three processes P1, P2 and P3 which have peak demands of 2, 5 and 7 resources respectively. For what value of ‘m’ will deadlock not occur?
A
70
B
14
C
13
D
7
E
None
Question 64 Explanation: 
P1 requires 2, P2 requires 5, P3 requires 7
In worst case, The number of units that each process holds = One less than its maximum demand
  • Process P1 holds 1 units of resource R
  • Process P2 holds 4 units of resource R
  • Process P3 holds 6 units of resource R
Maximum number of units of resource R that ensures deadlock = 1 + 4 + 6 = 11
Minimum number of units of resource R that ensures no deadlock = 11 + 1 = 12
So, any number of units greater than 11 will ensure no deadlock.
Note : The given key is option (B), but options A, B and C all satisfy the no-deadlock condition (m ≥ 12).
Question 65
Five jobs A, B, C, D and E are waiting in Ready Queue. Their expected runtimes are 9, 6, 3,5 and x respectively. All jobs entered in Ready queue at time zero. They must run in _____order to minimize average response time if 3 < x < 5.
A
B, A, D, E, C
B
C, E, D, B, A
C
E, D, C, B, A
D
C, B, A, E, D
Question 65 Explanation: 
In "Shortest job first scheduling" Average response time is minimum. Assuming x = 4, Shortest job first scheduling sequence shall be C(3), E(4), D(5), B(6) and A(9)

Let,
P = Process
BT = Burst or Execution time
CT = completion time
AWT = Average Waiting Time
AT = Arrival time
WT = Waiting time
Shortest job first scheduling
Question 66
Consider three CPU intensive processes P1 , P2 , P3 which require 20, 10 and 30 units of time, arrive at times 1, 3 and 7 respectively. Suppose operating system is implementing Shortest Remaining Time first (preemptive scheduling) algorithm, then _____ context switches are required (suppose context switch at the beginning of Ready queue and at the end of Ready queue are not counted).
A
3
B
2
C
4
D
5
Question 66 Explanation: 
BT = Burst or Execution time
P = Process
AT = Arrival time
Shortest Remaining Time first (preemptive scheduling)
P1 to P2 = 1 switch
P2 to P1 =2nd switch
P1 to P3 =3rd switch
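A compact SRTF simulation (a sketch using the arrival times and burst times from the question; the very first dispatch is not counted, matching the question's convention):

#include <stdio.h>

#define NPROC 3

int main(void)
{
    int arrival[NPROC]   = {1, 3, 7};       /* arrival times of P1, P2, P3 */
    int remaining[NPROC] = {20, 10, 30};    /* CPU time still needed       */
    int running = -1, switches = 0, done = 0;

    for (int t = 1; done < NPROC; t++) {
        /* SRTF: pick the arrived, unfinished process with the least remaining time. */
        int next = -1;
        for (int i = 0; i < NPROC; i++)
            if (arrival[i] <= t && remaining[i] > 0 &&
                (next == -1 || remaining[i] < remaining[next]))
                next = i;

        if (next != running) {
            if (running != -1)      /* ignore the very first dispatch */
                switches++;
            running = next;
        }
        if (--remaining[running] == 0)
            done++;
    }
    printf("Context switches = %d\n", switches);   /* P1->P2, P2->P1, P1->P3 = 3 */
    return 0;
}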
Question 67
If the Disk head is located initially at track 32, find the number of disk moves required with FCFS scheduling criteria if the disk queue of I/O blocks requests are:
98, 37, 14, 124, 65, 67
A
320
B
322
C
321
D
319
Question 67 Explanation: 
With FCFS, starting at track 32 the requests are served in the order 98, 37, 14, 124, 65, 67: total head movement = 66 + 61 + 23 + 110 + 59 + 2 = 321 moves.
Question 68
A scheduling Algorithm assigns priority proportional to the waiting time of a process. Every process starts with priority zero (lowest priority). The scheduler reevaluates the process priority for every ‘T’ time units and decides next process to be scheduled. If the process have no I/O operations and all arrive at time zero, then the scheduler implements _________ criteria.
A
Priority scheduling
B
Round Robin Scheduling
C
Shortest Job First
D
FCFS
Question 68 Explanation: 
The given scheduling algorithm works as round robin with Time quantum equals to T units, after a process is scheduled it gets executed for T time units, its waiting time becomes least and its turn comes again after every other process has completed T time units.
Question 69
Suppose that the number of instructions executed between page fault is directly proportional to the number of page frames allocated to a program.
If the available memory is doubled, the mean interval between page faults is also doubled.
Further, Consider that a normal instruction takes one microsecond, but if a page fault occurs, it takes 2001 microseconds.
If a program takes 60 sec to run, during which time it gets 15,000 page faults, how long would it take to run if twice as much memory were available?
A
60 sec
B
30 sec
C
45 sec
D
10 sec
Question 69 Explanation: 
  • A normal instruction takes 1 microsecond, i.e. 10^-6 sec.
  • An instruction with a page fault takes 2001 microseconds, so the page-fault overhead is 2000 microseconds.
  • The given program takes 60 sec to run and  In that program there were 15000 page fault
  • so time taken by 1 page faults 2000 micro seconds
  • so time taken by 15000 page faults = 15000x2000 microseconds = 30 sec
  • Out of overall  Given 60 seconds of the program time remaining 30 sec are consumed by program execution
  • If memory is doubled than the mean interval between the page faults is also doubled
  • Hence page fault rate will be reduced by half
  • So when Earlier 30 sec were needed by program instructions and 30 sec for page fault
  • Now program execution still takes 30 sec and the page faults take 15 sec, given by (15000 × 2000 × 10^-6)/2
  • So Total Time = 30+15 =45 sec.
Question 70
Consider a disk with 16384 bytes per track having a rotation time of 16 msec and average seek time of 40 msec. What is the time in msec to read a block of 1024 bytes from this disk?
A
57 msec
B
49 msec
C
48 msec
D
17 msec
Question 70 Explanation: 
Given,
Bytes per track = 16384
Rotational Time = 16 msec
Average Seek Time = 40 msec
Time in msec to read a block of 1024 bytes from this disk = ?
Since there are 16384 bytes per track, the 1024-byte block occupies 1024/16384 = 1/16 of a track.
Transfer Time(TT) = ((1024 bytes * 16 ms) / (16384 bytes )) = 1 msec
Rotational Latency = 16 msec / 2 = 8 msec
Total time = Seek time + Rotational Latency + Transfer time
= 40 msec + 8 msec +1 msc
= 49 msec
Question 71
A data cube C, has n dimensions, and each dimension has exactly p distinct values in the base cuboid. Assume that there are no concept hierarchies associated with the dimensions. What is the maximum number of cells possible in the data cube, C?
A
p^n
B
P
C
(2^n - 1)p + 1
D
(p + 1)^n
Question 71 Explanation: 
  • Option(A) : What is the maximum number of cells possible in the base cuboid?
    p^n.
    This is the maximum number of distinct tuples that you can form with p distinct values per dimension.
  • Option(B) : What is the minimum number of cells possible in the base cuboid?
    p.
    You need at least p tuples to contain p distinct values per dimension. In this case no tuple shares any value on any dimension.
  • Option(C) : What is the minimum number of cells possible in the data cube, C?
    (2^n - 1) × p + 1.
    The minimum number of cells is when each cuboid contains only p cells, except for the apex, which contains a single cell.
  • Option(D) : What is the maximum number of cells possible (including both base cells and aggregate cells) in the data cube, C?
    (p + 1)^n.
    The argument is similar to that of option (A), but now we have p + 1 because in addition to the p distinct values of each dimension we can also choose ∗.
Question 72
A disk drive has 100 cylinders, numbered 0 to 99. Disk requests come to the disk driver for cylinders 12, 26, 24, 4, 42, 8 and 50 in that order. The driver is currently serving a request at cylinder 24. A seek takes 6 msec per cylinder moved. How much seek time is needed for the shortest seek time first (SSTF) algorithm?
A
0.984 sec
B
0.396 sec
C
0.738 sec
D
0.42 sec
Question 72 Explanation: 
SSTF : Starting at cylinder 24, the head serves the closest pending request each time, giving the order 26, 12, 8, 4, 42, 50, as shown in the diagram.
shortest seek time first (SSTF) algorithm
So, total head movement = 02+14+04+04+38+08=70 cylinders.
Seek time = 70 × 6 = 420 msec
= 420/1000 sec = 0.42 sec
Note : 1 sec = 1000 msec
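A small SSTF sketch that reproduces the 70-cylinder movement (the pending request at cylinder 24 is served first with zero movement):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int pending[] = {12, 26, 24, 4, 42, 8, 50};
    int n = sizeof pending / sizeof pending[0];
    int served[8] = {0};
    int head = 24, moves = 0;

    /* SSTF: repeatedly serve the pending request closest to the current head position. */
    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!served[i] && (best == -1 ||
                abs(pending[i] - head) < abs(pending[best] - head)))
                best = i;
        moves += abs(pending[best] - head);
        head = pending[best];
        served[best] = 1;
    }
    printf("Head movement = %d cylinders, seek time = %d ms (%.2f s)\n",
           moves, moves * 6, moves * 6 / 1000.0);   /* 70 cylinders, 420 ms, 0.42 s */
    return 0;
}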
Question 73
Let Pi and Pj be two processes, R be the set of variables read from memory, and W be the set of variables written to memory. For the concurrent execution of two processes Pi and Pj, which of the following conditions is not true?
A
R(Pi) ∩ W(Pj) = Φ
B
W(Pi) ∩ R(Pj) = Φ
C
R(Pi) ∩ R(Pj) = Φ
D
W(Pi) ∩ W(Pj) = Φ
Question 73 Explanation: 
For concurrent execution there should be no read-write, write-read or write-write conflict (Bernstein's conditions).
Option (A) : R(Pi) ∩ W(Pj) = Φ must hold, because a read-write overlap would create a conflict.
Option (B) : W(Pi) ∩ R(Pj) = Φ must hold, because a write-read overlap would create a conflict.
Option (C) : R(Pi) ∩ R(Pj) = Φ need not hold - two processes can read the same data simultaneously, so this is the condition that is not required.
Option (D) : W(Pi) ∩ W(Pj) = Φ must hold, because a write-write overlap would create a conflict.
Question 74
A LRU page replacement is used with four page frames and eight pages. How many page faults will occur with the reference string 0172327103 if the four frames are initially empty?
A
6
B
7
C
8
D
5
Question 74 Explanation: 
With four initially empty frames, LRU faults on the first five distinct pages 0, 1, 7, 2, 3, then hits on 2, 7, 1, and faults again on 0 and 3, giving a total of 7 page faults.
Question 75
The Unix Kernel maintains two key data structures related to processes, the process table and the user structure. Which of the following information is not the part of user structure?
A
File descriptor table
B
System call state
C
Scheduling parameters
D
Kernel stack
Question 75 Explanation: 
The UNIX kernel keeps scheduling parameters in the process table, while the file descriptor table, system call state and kernel stack belong to the user structure. Hence scheduling parameters are not part of the user structure.
Question 76
The directory can be viewed as ________ that translates filenames into their directory entries.
A
Symbol table
B
Partition
C
Swap space
D
Cache
Question 76 Explanation: 
The directory can be viewed as symbol table that translates filenames into their directory entries. The symbol table is created by the compiler front-end as a stand-alone file. The purpose of the table is to provide information that the linker and the debugger need to perform their respective functions

Refer :
https://www3.physnet.uni-hamburg.de/physnet/Tru64-Unix/HTML/APS31DTE/DOCU_014.HTM
Question 77
Consider the following justifications for commonly using the two-level CPU scheduling :
I. It is used when memory is too small to hold all the ready processes.
II. Because its performance is same as that of the FIFO.
III. Because it facilitates putting some set of processes into memory and a choice is made from that.
IV. Because it does not allow to adjust the set of in-core processes.
A
I, III and IV
B
I and II
C
III and IV
D
I and III
Question 77 Explanation: 
The two-level CPU scheduling is used when memory is too small to hold all the ready processes because it facilitates putting some set of processes into memory and a choice is made from that.
Refer : https://en.wikipedia.org/wiki/Two-level_scheduling
Question 78
A specific editor has 200 K of program text, 15 K of initial stack, 50 K of initialized data, and 70 K of bootstrap code. If five editors are started simultaneously, how much physical memory is needed if shared text is used ?
A
1135 K
B
335 K
C
1065 K
D
320 K
Question 78 Explanation: 
Given,
Program Text = 200 K
Initial stack = 15 K
Initialized data=50 K
Bootstrap code=70 K
Number of editors = 5
physical memory is needed if shared text is used = ?
Since the five editors are started simultaneously and the 200 K of program text is shared, the key counts a single copy of each component:
Total physical memory needed = Program text + Initial stack + Initialized data + Bootstrap code
= 200 K + 15 K + 50 K + 70 K
= 335 K
Question 79
Which of the following conditions does not hold good for a solution to a critical section problem ?
A
No assumptions may be made about speeds or the number of CPUs.
B
No two processes may be simultaneously inside their critical sections.
C
Processes running outside its critical section may block other processes.
D
Processes do not wait forever to enter its critical section
Question 79 Explanation: 
In Critical section problem:
  • No assumptions may be made about speeds or the number of CPUs.
  • No two processes may be simultaneously inside their critical sections.
  • A process running outside its critical section must not block other processes from entering their critical sections; whatever it does outside the critical section does not affect the processes running inside it. Hence option (C) does not hold.
  • Processes do not wait forever to enter its critical section.
Refer : http://www2.cs.uregina.ca/~hamilton/courses/330/notes/synchro/node2.html
Question 80
For the implementation of a paging scheme, suppose the average process size be ‘x’ bytes, the page size be ‘y’ bytes, and each page entry requires ‘z’ bytes. The optimum page size that minimizes the total overhead due to the page table and the internal fragmentation loss is given by
A
X/2
B
Xz/2
C
√2xz
D
√ xz/ 2
Question 80 Explanation: 
The page table costs (x/y)·z bytes per process and internal fragmentation wastes y/2 bytes on average; the total overhead is minimized when the page size is √(2xz), which is option (C). A short derivation follows.
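A short derivation of this standard result (not part of the original explanation):

Let x = average process size, y = page size and z = bytes per page-table entry.
Total overhead per process = page-table space + average internal fragmentation
= (x/y)·z + y/2
Setting the derivative with respect to y to zero:
d/dy [ xz/y + y/2 ] = -xz/y^2 + 1/2 = 0
⇒ y^2 = 2xz
⇒ optimal page size y = √(2xz)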
Question 81
In a demand paging memory system, page table is held in registers. The time taken to service a page fault is 8 m.sec. if an empty frame is available or if the replaced page is not modified, and it takes 20 m.secs., if the replaced page is modified. What is the average access time to service a page fault assuming that the page to be replaced is modified 70% of the time ?
A
11.6 m.sec.
B
16.4 m.sec.
C
28 m.sec.
D
14 m.sec.
Question 81 Explanation: 
Time to serve page fault if frame is empty or page replaced is not modified = 8 m.sec
Time to serve page fault if page is modified = 20 m.sec
Frequency of Page modification = 70% = 70/100 = 0.7
Frequency when page is not modified or frame is empty = (100-70)% = 30% = 0.3
Average access time = (fraction not modified × service time when not modified) + (fraction modified × service time when modified)
Average access time = (0.3 * 8) + (0.7 * 20)
= 2.4 + 14
= 16.4 m.sec
Question 82
Suppose a system has 12 instances of some resources with n processes competing for that resource. Each process may require 4 instances of the resource. The maximum value of n for which the system never enters into deadlock is
A
3
B
4
C
5
D
6
Question 82 Explanation: 
Given,
Total Number of Resources (R) Available = 12
Maximum need for each resource (N) = 4
Deadlock-free condition :
R ≥ P(N − 1) + 1
Where,
R is total number of resources available,
P is the number of processes,
N is the maximum need for each resource.
12 ≥ P(4 − 1) + 1
11 ≥ 3P
11/3 ≥ P
P ≤ 3.66
P=3
∴Maximum value of n for which the system never enters into deadlock is 3

Note :
-Floor value for maximum
-Ceil value for minimum
Question 83
​To overcome difficulties in Readers-Writers problem, which of the following statement/s is/are true?
1) Writers are given exclusive access to shared objects
2) Readers are given exclusive access to shared objects
3) Both readers and writers are given exclusive access to shared objects.
Choose the correct answer from the code given below:
A
1 only
B
Both 2 and 3
C
2 only
D
3 only
Question 83 Explanation: 
In the readers-writers problem, writers must be given exclusive access to the shared object, while multiple readers may read it concurrently. Hence only statement 1 is true.
Question 84
Suppose P,Q and R are co-operating processes satisfying Mutual Exclusion condition. Then if the process Q is executing in its critical section then
A
‘P’ executes in critical section
B
‘R’ executes in critical section
C
Neither ‘P’ nor ‘R’ executes in its critical section
D
Both ‘P’ and ‘R’ executes in critical section
Question 84 Explanation: 
Mutual exclusion means only one process can be inside the critical section at a time, so while process Q is executing in its critical section, no other process can enter it.
Mutual exclusion
Question 85
A Computer uses a memory unit with 256K word of 32 bits each. A binary instruction code is stored in one word of memory. The instruction has four parts: an indirect bit, an operation code and a register code part to specify one of 64 registers and an address part. How many bits are there in operation code, the register code part and the address part?
A
7,7,18
B
18,7,7
C
7,6,18
D
6,7,18
Question 85 Explanation: 
 
Indirect bit: 1 bit
Address part: the memory has 256K words; 256K = 2^8 × 2^10 = 2^18, so 18 bits are needed to address a word.
Register code: one of 64 registers, and 64 = 2^6, so 6 bits are needed.
Operation code: 32 - 1 - 18 - 6 = 7 bits.
Question 86
Consider a system with 2 level cache. Access times of Level 1, Level 2 cache and main memory are 0.5 ns, 5 ns and 100 ns respectively. The hit rates of Level1 and Level2 caches are 0.7 and 0.8 respectively. What is the average access time of the system ignoring the search time within cache?
A
20.75 ns
B
7.55 ns
C
24.35 ns
D
35.20 ns
Question 86 Explanation: 

Ignoring search time with in the cache statement tells us to use parallel access( simultaneous access ).

By default we consider hierarchical access - because that is the common implementation and simultaneous access cache has great practical difficulty
But in this question we are told to ignore the search time within the cache. Search time usually applies to an associative cache, and no such information is given here, so the intent is to ignore the L1 time and count only the L2 time on an L1 miss, and similarly to count only the memory access time on an L2 miss. This is nothing but simultaneous access.

Access time for hierarchical access
=h1×t1+ (1-h1) h2(t1+t2) +(1-h1)(1-h2)(t1+t2+tm)

Access time for simultaneous access
=h1×t1+(1−h1)h2×t2+(1−h1)(1−h2)tm
where,
H1 = Hit rate of level 1 cache = 0.7
T1 = Access time for level 1 cache = 0.5 ns
H2 = Hit rate of level 2 cache = 0.8
T2 = Access time for level 2 cache = 5 ns
Hm = Hit rate of Main Memory = 1
Tm = Access time for Main Memory = 100 ns
=h1×t1+(1−h1)h2×t2+(1−h1)(1−h2)tm
= 0.7(0.5) + 0.3(0.8)(5) + 0.3(0.2)(100)
= 7.55 ns
Question 87
Consider a disk pack with 32 surfaces, 64 tracks and 512 sectors per pack. 256 bytes of data are stored in a bit serial manner in a sector. The number of bits required to specify a particular sector in the disk is
A
19
B
20
C
18
D
22
Question 87 Explanation: 
Given,
Number of surfaces = 32 (2^5)
Number of tracks per surface = 64 (2^6)
Number of sectors per track = 512 (2^9)
Number of bytes per sector = 256 bytes (2^8)
Total number of sectors
= Number of surfaces × Number of tracks per surface × Number of sectors per track
= 32 × 64 × 512
= 2^5 × 2^6 × 2^9
= 2^20
So 5 + 6 + 9 = 20 bits are required to identify each sector uniquely.
Question 88
Dirty bit is used to show the
A
Page with low frequency occurrence
B
Wrong page
C
Page with corrupted data
D
Page that is modified after being loaded into cache memory
Question 88 Explanation: 
Dirty bit
  • A dirty bit or modified bit is a bit that is associated with a block of computer memory and indicates whether or not the corresponding block of memory has been modified.
  • The dirty bit is set when the processor writes to (modifies) this memory. The bit indicates that its associated block of memory has been modified and has not been saved to storage yet.
  • When a block of memory is to be replaced, its corresponding dirty bit is checked to see if the block needs to be written back to secondary memory before being replaced or if it can simply be removed.
  • Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
  • Example : When a page is modified inside the cache and the changes need to be written back to main memory, the dirty bit is set to 1 so as to maintain the record of modified pages.
Question 89
Suppose for a process P, reference to pages in order are 1, 2, 4, 5,2,1,2,4. Assume that main memory can accommodate 3 pages and the main memory has already pages 1 and 2 in the order 1 - first, 2- second. At this moment, assume FIFO Page Replacement Algorithm is used then the number of page faults that occur to complete the execution of process P is
A
6
B
4
C
3
D
5
Question 89 Explanation: 
With pages 1 and 2 already resident (1 loaded first), FIFO loads 4 into the free frame (fault), then faults on 5 (evicting 1), hits on 2, and faults on 1 (evicting 2), 2 (evicting 4) and 4 (evicting 5) - 5 page faults in total. A simulation sketch follows.
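A minimal FIFO sketch that reproduces the count, with pages 1 and 2 pre-loaded in that order:

#include <stdio.h>

int main(void)
{
    int ref[] = {1, 2, 4, 5, 2, 1, 2, 4};     /* references made by process P   */
    int n = sizeof ref / sizeof ref[0];
    int frame[3] = {1, 2, -1};                /* pages 1 and 2 already loaded   */
    int size = 2, oldest = 0, faults = 0;     /* page 1 is the oldest resident  */

    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int i = 0; i < size; i++)
            if (frame[i] == ref[t]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (size < 3) {                       /* free frame still available           */
            frame[size++] = ref[t];
        } else {                              /* FIFO: evict the page loaded earliest */
            frame[oldest] = ref[t];
            oldest = (oldest + 1) % 3;
        }
    }
    printf("Page faults = %d\n", faults);     /* prints 5 */
    return 0;
}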
Question 90
______________ system call creates new process in Unix.
A
Fork
B
Fork new
C
Create
D
Create new
Question 90 Explanation: 
A new process is created by fork() system call in UNIX. fork() system call returns a process ID which is generally the process id of the child process created.
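A minimal example of fork() in C (a sketch; error handling is kept to the bare minimum):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* create a new (child) process */

    if (pid < 0) {
        perror("fork failed");
        return 1;
    } else if (pid == 0) {
        /* fork() returns 0 in the child */
        printf("child:  pid = %d, parent = %d\n", getpid(), getppid());
    } else {
        /* fork() returns the child's PID in the parent */
        printf("parent: pid = %d, child = %d\n", getpid(), pid);
        wait(NULL);                      /* wait for the child to finish */
    }
    return 0;
}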
Question 91
A process residing in main memory and ready and waiting for execution, is kept on
A
Job Queue
B
Execution Queue
C
Wait Queue
D
Ready Queue
Question 91 Explanation: 
A process residing in main memory and ready and waiting for execution, is kept on Ready Queue
process state transition diagram
Question 92
What is the output of the following program segment ?
#include <stdio.h>

int sum(int n)
{
 if ( n < 1 ) 
  return n;
 else 
  return (n + sum(n-1));
}

int main()
{
  printf("%d", sum(5));
  return 0;
}
A
10
B
16
C
15
D
14
Question 92 Explanation: 
sum(5) = 5 + 4 + 3 + 2 + 1 + sum(0) = 15, since the recursion stops when n < 1 and sum(0) returns 0. So the output is 15.
Question 93
The hit ratio of a Translation Lookaside Buffer (TLB) is 80%. It takes 20 nanoseconds (ns) to search the TLB and 100 ns to access main memory. The effective memory access time is ______.
A
36 ns
B
140 ns
C
122 ns
D
40 ns
Question 93 Explanation: 
Effective memory access(EMA) = Hit ratio * (TLB access time + Main memory access time) + (1-Hit ratio) * (TLB access time + (L+1) * main memory time)
Where,
L = Number of levels of page table
Miss Rate = (1-Hit ratio)
Note : This formula is valid only when there are no page faults

Given,
Number of levels of page table = 1
TLB access time = 20 ns
Main memory access time = 100 ns
TLB Hit ratio = 80% = 0.8
TLB Miss ratio= 1 – 0.8= 0.2

Effective Memory Access Time
= 0.8 x { 20 ns + 100 ns } + 0.2 x { 20 ns + (1+1) x 100 ns }
= 0.8 × 120 ns + 0.2 × 220 ns
= 96 ns + 44 ns
= 140 ns
Question 94
Assume that an implementation of Unix operating system uses i-nodes to keep track of data blocks allocated to a file. It supports 12 direct block addresses, one indirect block address and one double indirect block address. The file system has 256 bytes block size and 2 bytes for disk block address. The maximum possible size of a file in this system is
A
16 MB
B
16 KB
C
70 KB
D
71 KB
E
None
Question 94 Explanation: 
  • Block size = 256 Bytes
  • Size of one disk block address = 2 Bytes
  • Each indirect block stores 256/2 = 128 pointers.
  • A double indirect block can point to 128 x 128 = 16384 blocks.
  • Maximum file size = (12 + 128 + 128 x 128) x 256 = 4230144 B ≈ 4 MB, which matches none of the listed sizes.
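As a cross-check, here is an illustrative sketch (my own, not part of the original explanation) that computes the full maximum file size for this i-node layout, including the direct, single indirect and double indirect blocks.

#include <stdio.h>

int main(void) {
    long block = 256;                 /* block size in bytes */
    long addr  = 2;                   /* disk block address size in bytes */
    long ptrs  = block / addr;        /* pointers per indirect block = 128 */

    long direct_blocks   = 12;
    long single_indirect = ptrs;              /* 128 blocks */
    long double_indirect = ptrs * ptrs;       /* 128 * 128 = 16384 blocks */

    long max_bytes = (direct_blocks + single_indirect + double_indirect) * block;
    printf("Maximum file size = %ld bytes (about %.2f MB)\n",
           max_bytes, max_bytes / (1024.0 * 1024.0));   /* 4230144 bytes ~ 4.03 MB */
    return 0;
}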
Question 95
At a particular time of computation, the value of a counting semaphore is 10. Then 12 P operations and “x” V operations were performed on this semaphore. If the final value of semaphore is 7, x will be :
A
8
B
9
C
10
D
11
Question 95 Explanation: 
Given,
Counting Semaphore value (S) = 10
P operations = 12
V operations = x
Final Semaphore value (S) = 7

P operation also called as wait operation decrements the value of semaphore variable by 1.
V operation also called as signal operation increments the value of semaphore variable by 1.

Initially  value of a counting semaphore  S= 10
After 12 P operation are performed "S" Value became = -2
“x” V operations are performed on this semaphore S
now Final value of counting semaphore S = 7
x + (-2) = 7
x -2 = 7
x = 7+2
x = 9.
Question 96
In a paged memory, the page hit ratio is 0.40. The time required to access a page in secondary memory is equal to 120 ns. The time required to access a page in primary memory is 15 ns. The average time required to access a page is .
A
105
B
68
C
75
D
78
Question 96 Explanation: 
Given,
Page Hit Ratio is = 0.40
Secondary Memory Access Time=120 ns
Primary Memory Access Time=15 ns
Miss Ratio =1 – hit ratio
Average access time= ?

Average access time = hit ratio * primary memory access time + (1 – hit ratio) * secondary memory access time

Average access time = 0.4 * 15 + 0.6 * 120
Average access time = 6 + 72
Average access time = 78.
Question 97
Which UNIX/Linux command is used to make all files and sub-directories in the directory “progs” executable by all users ?
A
chmod -R a+x progs
B
chmod -R 222 progs
C
chmod -X a+x progs
D
chmod -X 222 progs
Question 97 Explanation: 
chmod -R a+x progs is used to make all files and sub-directories in the directory “progs” executable by all users
Question 98
Which of the following statements are true ?
(a) External Fragmentation exists when there is enough total memory space to satisfy a request but the available space is contiguous.
(b) Memory Fragmentation can be internal as well as external.
(c) One solution to external Fragmentation is compaction.
A
(a) and (b) only
B
(a) and (c) only
C
(b) and (c) only
D
(a), (b) and (c)
Question 98 Explanation: 
Statement(A) : False
External Fragmentation exists where there is enough total memory space to satisfy a request but available space is not contiguous.

Statement(B) : True
Memory Fragmentation can be internal as well as external.

Statement(C) : True
One solution to external Fragmentation is compaction or shuffle memory contents.
Best Fit Block Search is the solution for internal fragmentation
Question 99
Page information in memory is also called as Page Table. The essential contents in each entry of a page table is/are .
A
Page Access information
B
Virtual Page number
C
Page Frame number
D
Both virtual page number and Page Frame Number
Question 99 Explanation: 
  • Page information in memory is also called as Page Table
  • A page table entry must contain Page frame number.
  • Virtual page number is typically used as index in page table to get the corresponding page frame number.
Question 100
Consider a virtual page reference string 1, 2, 3, 2, 4, 2, 5, 2, 3, 4. Suppose LRU page replacement algorithm is implemented with 3 page frames in main memory. Then the number of page faults are .
A
5
B
7
C
9
D
10
Question 100 Explanation: 
Given String,
1, 2, 3, 2, 4, 2, 5, 2, 3, 4
LRU page replacement algorithm Total number of page faults are 7
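As an illustration (my own sketch, not part of the original solution), the C program below simulates LRU replacement for this reference string with 3 frames and confirms the count of 7.

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {1, 2, 3, 2, 4, 2, 5, 2, 3, 4};
    int n = sizeof refs / sizeof refs[0];
    int page[FRAMES];      /* page held by each frame (-1 = empty)     */
    int last[FRAMES];      /* time of most recent use of that frame    */
    int faults = 0;

    for (int i = 0; i < FRAMES; i++) { page[i] = -1; last[i] = -1; }

    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;
        for (int i = 0; i < FRAMES; i++) {
            if (page[i] == refs[t]) hit = i;
            if (last[i] < last[victim]) victim = i;   /* least recently used frame */
        }
        if (hit >= 0) {
            last[hit] = t;                 /* refresh recency on a hit */
        } else {
            faults++;
            page[victim] = refs[t];        /* empty or LRU frame gets the new page */
            last[victim] = t;
        }
    }
    printf("LRU page faults: %d\n", faults);   /* prints 7 */
    return 0;
}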
Question 101
If the disk head is located initially at 32, find the number of disk moves required with FCFS if the disk queue of I/O blocks requests are 98, 37, 14,124, 65, 67.
A
239
B
310
C
321
D
325
Question 101 Explanation: 
FCFS disk scheduling
FCFS services the requests in arrival order: 32 → 98 → 37 → 14 → 124 → 65 → 67.
Head movement = |32-98| + |98-37| + |37-14| + |14-124| + |124-65| + |65-67| = 66 + 61 + 23 + 110 + 59 + 2 = 321.
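A short illustrative C snippet (my own sketch, not part of the original solution) that computes the same FCFS total head movement:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int head = 32;
    int req[] = {98, 37, 14, 124, 65, 67};
    int n = sizeof req / sizeof req[0];
    int moves = 0;

    for (int i = 0; i < n; i++) {        /* FCFS: serve requests in arrival order */
        moves += abs(req[i] - head);
        head = req[i];
    }
    printf("Total head movement = %d\n", moves);   /* prints 321 */
    return 0;
}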
Question 102
Given memory partitions of 100 K, 500 K, 200 K, 300 K and 600 K (in order) and processes of 212 K, 417 K,112 K, and 426 K (in order), using the first-fit algorithm, in which partition would the process requiring 426 K be placed ?
A
500 K
B
200 K
C
300 K
D
600 K
E
None
Question 102 Explanation: 
first fit algorithm
First fit scans the partitions in order: 212 K is placed in the 500 K partition, 417 K in the 600 K partition, and 112 K in the 200 K partition. For 426 K, none of the remaining free partitions (100 K and 300 K) is large enough, so it cannot be placed and must wait.
Question 103
What is the size of the Unicode character in Windows Operating System ?
A
8-Bits
B
16-Bits
C
32-Bits
D
64-Bits
Question 103 Explanation: 
  • Unicode-enabled functions are described in Conventions for Function Prototypes.
  • These functions use UTF-16 (wide character) encoding, which is the most common encoding of Unicode and the one used for native Unicode encoding on Windows operating systems.
  • Each code value is 16 bits wide, in contrast to the older code page approach to character and string data, which uses 8-bit code values.
  • The use of 16 bits allows the direct encoding of 65,536 characters.
  • In fact, the universe of symbols used to transcribe human languages is even larger than that, and UTF-16 code points in the range U+D800 through U+DFFF are used to form surrogate pairs, which constitute 32-bit encodings of supplementary characters
Question 104
The problem of indefinite blockage of low-priority jobs in general priority scheduling algorithm can be solved using :
A
Parity bit
B
Aging
C
Compaction
D
Timer
Question 104 Explanation: 
  • In Priority scheduling technique, we assign some priority to every process we have and based on that priority, the CPU will be allocated and the process will be executed. Here, the CPU will be allocated to the process that is having the highest priority
  • If higher priority processes keep arriving, the CPU is always allocated to them and a low priority process keeps waiting for its turn; this situation is called starvation
  • To avoid starvation, we use the concept called Aging.
  • In Aging, after some fixed amount of time quantum, we increase the priority of the low priority processes. By doing so, as time passes, the lower priority process becomes a higher priority process.
Question 105
Which of the following memory allocation scheme suffers from external fragmentation ?
A
Segmentation
B
Pure demand paging
C
Swapping
D
Paging
Question 105 Explanation: 
  • Segmentation is free of internal fragmentation but Suffers from external fragmentation.
  • In Paging There is no external fragmentation but internal fragmentation exists.
  • Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or moved) to secondary storage to make that memory available to other processes. At some later time, the system swaps the process back from secondary storage to main memory.
Question 106
In UNIX, which of the following command is used to set the task priority ?
A
Init
B
Nice
C
Kill
D
PS
Question 106 Explanation: 
The nice command in UNIX/Linux is used to run a program with a modified scheduling priority (niceness); renice changes the priority of an already running process.
Question 107
The process of assigning load addresses to the various parts of the program and adjusting the code and data in the program to reflect the assigned addresses is called _______.
A
Symbol resolution
B
Parsing
C
Assembly
D
Relocation
Question 107 Explanation: 
  • Relocation is the process of assigning load addresses for position-dependent code and data of a program and adjusting the code and data to reflect the assigned addresses
  • Relocation is typically done by the linker at link time, but it can also be done at load time by a relocating loader, or at run time by the running program itself.
  • Some architectures avoid relocation entirely by deferring address assignment to run time; this is known as zero address arithmetic.
  • A linker usually performs relocation in conjunction with symbol resolution, the process of searching files and libraries to replace symbolic references or names of libraries with actual usable addresses in memory before running a program.
Question 108
The virtual address generated by a CPU is 32 bits. The Translation Lookaside Buffer (TLB) can hold a total of 64 page table entries and is 4-way set associative (i.e. with 4 cache lines per set). The page size is 4 KB. The minimum size of the TLB tag is
A
12 bits
B
15 bits
C
16 bits
D
20 bits
Question 108 Explanation: 
Virtual Address = 32 bits
Page size = 4 KB = 2^12 Bytes
Number of bits needed to address the page frame = 32 - 12 = 20 bits
Translation Look-aside Buffer (TLB) can hold 64 page table entries with 4-way set associativity
Number of sets = 64/4 = 16 = 2^4
Therefore 4 bits are needed to address a set
So Tag bits = 20 - 4 = 16 bits.
Question 109
Consider a disk queue with request for input/output to block on cylinders 98, 183, 37, 122, 14, 124, 65, 67 in that order. Assume that disk head is initially positioned at cylinder 53 and moving towards cylinder number 0. The total number of head movements using Shortest Seek Time First (SSTF) and SCAN algorithms are respectively
A
236 and 252 cylinders
B
640 and 236 cylinders
C
235 and 640 cylinders
D
235 and 252 cylinders
E
None
Question 109 Explanation: 
Shortest Seek Time First (SSTF) and SCAN algorithms
Question 110
How much space will be required to store the bitmap of a 1.3 GB disk with 512 bytes block size ?
A
332.8 KB
B
83.6 KB
C
266.2 KB
D
256.6 KB
Question 110 Explanation: 
Disk Size: 1.3 G bytes = 1.3 * 2^30 bytes
Block Size: 512 bytes = 2^9 bytes

Number of Blocks = disk size / block size
= 1.3 * 2^30 bytes / 2^9 bytes = 1.3 * 2^21

In Bitmap we need a bit for each block
Number of entries in bit table = Number of blocks
= 1.3 * 2^21

Size of bit table in bits
= 1.3 * 2^21 bits

Size of bit table in bytes
= size in bits / (8 bits/byte)
= 1.3 * 2^21 / 8 bytes
= 1.3 * 2^21 / 2^3 bytes
= 1.3 * 2^18 bytes
= 1.3 * 2^8 * 2^10 bytes
= 1.3 * 256 K bytes
= 332.8 K bytes
Question 111
Linux operating system uses
A
Affinity Scheduling
B
Fair Preemptive Scheduling
C
Hand Shaking
D
Highest Penalty Ratio Next
Question 112
Preemptive scheduling is the strategy of temporarily suspending a running process
A
Before the CPU time slice expires
B
To allow starving processes to run
C
When it requests I/O
D
To avoid collision
Question 112 Explanation: 
  • In preemptive scheduling tasks are usually assigned with priorities.
  • The tasks are prioritized and the task with the highest priority among all other tasks gets the CPU time
  • If a task with a priority higher than the currently executing task becomes ready to run, the kernel saves the context of the current task and switches to the higher priority task by loading its context.
  • Usually, the highest priority task runs to completion or until it becomes non-computable
  • Therefore, the running task is interrupted for some time and resumed later when the priority task has finished its execution. This is called preemptive scheduling.
  • In non-preemptive scheduling, a running task is executed till completion. It cannot be interrupted.
Question 113
In round robin CPU scheduling as time quantum is increased the average turnaround time
A
Increases
B
Decreases
C
Remains constant
D
Varies irregularly
Question 113 Explanation: 
ROUND ROBIN SCHEDULING
As the time quantum becomes very large, round robin degenerates to FCFS; as it becomes very small, context-switch overhead dominates. The average turnaround time does not change monotonically with the quantum, so it varies irregularly.
Question 114
Resources are allocated to the process on non-sharable basis is
A
Mutual exclusion
B
Hold and wait
C
No pre-emption
D
Circular wait
Question 114 Explanation: 
Mutual exclusion: at least one resource is held in a non-sharable mode, i.e. only one process at a time can use the resource. It is one of the four necessary conditions for deadlock (mutual exclusion, hold and wait, no preemption, circular wait).
Question 115
Consider the following page trace :
4,3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5
Percentage of page fault that would occur if FIFO page replacement algorithm is used with number of frames for the JOB m = 4 will be
A
8
B
9
C
10
D
12
Question 115 Explanation: 
FIFO page replacement algorithm
With 4 frames, the references 4, 3, 2, 1 all fault; 4 and 3 then hit; and 5, 4, 3, 2, 1, 5 each fault again. Total page faults = 10.
Question 116
Dijkstra's banking algorithm in an operating system, solves the problem of
A
Deadlock avoidance
B
Deadlock recovery
C
Mutual exclusion
D
Context switching
Question 116 Explanation: 
  • Dijkstra's banking algorithm in an operating system solves the problem of deadlock avoidance
  • The Banker's algorithm is a resource allocation & deadlock avoidance algorithm developed by Edsger Dijkstra
  • Banker's algorithm tests for safety by simulating the allocation of pre-determined maximum possible amounts of all resources, and then makes a "safe-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.
  • In Banker's algorithm we have 4 data structures are used
    1) Available
    2) Max
    3) Allocation
    4) Need
Question 117
Consider a logical address space of 8 pages of 1024 words mapped with memory of 32 frames. How many bits are there in the physical address ?
A
9 bits
B
11 bits
C
13 bits
D
15 bits
Question 117 Explanation: 

****We know that page and frame both have the same size***

Physical Memory has 32 frames = 2^5
Page/frame size = 1024 words = 2^10
Size of physical address space = Number of frames × Frame size
Size of physical address space = 2^5 × 2^10 = 2^15
»» Number of required bits in the physical address = 15

Logical Memory has 8 pages = 2^3
Page size = 1024 words = 2^10
Size of logical address space = 2^m
= Number of pages × page size
= 2^3 × 2^10 = 2^13
»» m = 13 bits
»» Number of required bits in the logical address = 13 bits
logical address space
Question 118
Determine the number of page faults when references to pages occur in the occur - 1, 2, 4, 5, 2, 1, 2, 4. Assume that the main memory can accommodate 3 pages and the main memory already has the pages 1 and 2, with page 1 having been brought earlier than page 2. (Assume LRU algorithm is used)
A
3
B
5
C
4
D
None of the above
Question 118 Explanation: 
LRU with 3 frames and pages 1, 2 already in memory: 1 (hit), 2 (hit), 4 (fault), 5 (fault, evicts 1), 2 (hit), 1 (fault, evicts 4), 2 (hit), 4 (fault, evicts 5). Number of page faults = 4
Question 119
The only state transition that is initiated by the user process itself is
A
Block
B
Dispatch
C
Wakeup
D
None of the above
Question 119 Explanation: 
The only state transition that is initiated by the user process itself is block. Whenever a user process initiates an I/O request, it goes into the blocked state until the I/O request is completed.
Question 120
Object modules generated by assembler that contains unresolved external references are resolved for two or more object module by a/an
A
Operating system
B
Loader
C
Linker
D
Compiler
Question 120 Explanation: 
  • In a single file program, while producing the object file, all references to labels are replaced by their corresponding addresses by the assembler.
  • In case of a multi-file program, if there are any references to labels defined in another file, the assembler marks these references as "unresolved".
  • When these object files are passed to the linker, the linker determines the values for these references from the other object files, and patches the code with the correct values.
Question 121
A special software that is used to create a job queue is called
A
Drive
B
Spooler
C
Interpreter
D
Linkage editor
Question 121 Explanation: 
  • Spooler is a software which works on creating a typical request queue where data, instructions and processes from multiple sources are accumulated for execution later on.
  • Spooler is maintained on computer’s physical memory, buffers or the I/O device-specific interrupts.
  • The spool is processed in FIFO manner i.e. whatever first instruction is there in the queue will be popped and executed.
  • How does printer spooling happen ? Whichever computer you printed from will handle all of the spooling.
  • However, if you’re using a printer that’s shared over your network, the network server may handle it.
  • A “spooler program” is what manages all of the print jobs in queue. It’ll send the line of documents in the order they were received to the printer (when available).
  • It may seem a bit frustrating at first, but this functionality is actually designed to improve speed and efficiency, even though you may commonly see this term when your printer isn’t doing anything at all.
Question 122
Which of the following can be accessed by transfer vector approach of linking ?
A
External data segments
B
External subroutine
C
Data located in other procedure
D
All of the above
Question 122 Explanation: 
External subroutines are routines that are created and maintained separately from the program that will be calling them.
Question 123
A relationship between processes such that each has some part (critical section) which must not be executed while the critical section of another is being executed, is known as
A
Semaphore
B
Mutual exclusion
C
Multiprogramming
D
Message passing
Question 123 Explanation: 
Mutual exclusion: each process has a critical section that must not be executed while another process is executing its own critical section.
Question 124
How many states can a process be in ?
A
3
B
4
C
2
D
5
Question 124 Explanation: 
There are 5 states in the Process State Diagram
1) New
2) Ready
3) Running
4) Waiting
5) Terminated
Process States State Diagram
The minimum number of states is 5 when the process from its creation to completion, passes through various states.
Question 125
Virtual memory is
A
Related to virtual reality
B
A form of ROM
C
A form of RAM
D
None of the above
Question 125 Explanation: 
virtual memory
Question 126
A virtual memory based memory management algorithm partially swaps out a process. This is an example of
A
Short term scheduling
B
Long term scheduling
C
Medium term scheduling
D
Mutual exclusion
Question 126 Explanation: 
medium term scheduling
The medium-term scheduler swaps processes (fully or partially) out of main memory to secondary storage and later swaps them back in, so partially swapping out a process is a medium-term scheduling decision.
Question 127
Assuming that the disk head is located initially at 32, find the number of disk moves required with FCFS if the disk queue of I/O block requests are
98, 37, 14, 124, 65, 67 :
A
310
B
324
C
320
D
321
Question 127 Explanation: 
FCFS disk scheduling
FCFS services the requests in arrival order: 32 → 98 → 37 → 14 → 124 → 65 → 67.
Head movement = |32-98| + |98-37| + |37-14| + |14-124| + |124-65| + |65-67| = 66 + 61 + 23 + 110 + 59 + 2 = 321.
Question 128
Let the page fault service time be 10 millisecond(ms) in a computer with average memory access time being 20 nanosecond(ns). If one page fault is generated for every 10^6 memory accesses, what is the effective access time for memory ?
A
21 ns
B
23 ns
C
30 ns
D
35 ns
Question 129
Consider the following UNIX command :
sort <in> temp; head -30 <temp; rm temp
Which of the following functions shall be performed by this command ?
A
Sort, taking the input from“temp”, prints 30 lines from temp and delete the file temp
B
Sort the file “temp”, removes 30 lines from temp and delete the file temp
C
Sort, taking the input from “in” and writing the output to“temp” then prints 30 lines from temp on terminal. Finally “temp” is removed.
D
Sort, taking the input from “temp” and then prints 30 lines from “temp” on terminal. Finally “temp” is removed
Question 129 Explanation: 
sort <in> temp; head - 30 <temp; rm temp
  • Head command prints the first N number of data of the given input. By default, it prints first 10 lines of each given file.
    head -30 means the first 30 lines are to be read
  • sort sorts the contents of a text file, line by line.
     sort <in> temp means Sort, taking the input from "in" and writing the output to "temp"
  • rm command removes a file
    rm temp means temp is removed
  • The symbol > used to redirection of output and symbol < used to redirect the input
Question 130
The ‘mv’ command changes
A
The inode
B
The inode-number
C
The directory entry
D
Both the directory entry and the inode
Question 130 Explanation: 
mv command in linux
When a file is moved within the same file system, mv only changes the directory entry (the name/location); the inode and the inode number remain the same.
Question 131
Concurrent processes are processes that
A
Do not overlap in time
B
Overlap in time
C
Are executed by a processor at the same time
D
None of the above
Question 131 Explanation: 
  • Processes that interact with each other by cooperating to achieve a common goal are called concurrent processes.
  • Concurrent processes are processes that share the CPU and memory. They do overlap in time while execution. At a time CPU entertain only one process but it can switch to other process without completing it as a whole.
Question 132
Object code is the output of ______.
A
Operating System
B
Compiler or Assembler
C
Only Assembler
D
Only compiler
Question 132 Explanation: 
  • Source code is  programming statements that are created by a programmer with a text editor or a visual programming tool and then saved in a file.
  • Object code generally refers to the output, a compiled file, which is produced when the Source Code is compiled with a C compiler.
  • The object code file contains a sequence of machine-readable instructions that is processed by the CPU in a computer.
Question 133
WINDOWS is a _________ operating system.
A
Real time
B
Multi-user
C
Preemptive
D
Non-preemptive
Question 133 Explanation: 
Windows is a multi-user system: in Task Manager we can see processes belonging to multiple users, and it is also preemptive.

Only "WINDOWS" is specified; no particular version is given. Windows 3.x was cooperative (non-preemptive) and not multi-user, but Windows 95, NT and 2000 are preemptive, and Windows NT and 2000 are multi-user. Windows 3.x also cannot be categorized as a real-time operating system, so the question itself has some ambiguity.
Question 134
Page making process from main memory to disk is called
A
Interruption
B
Termination
C
Swapping
D
None of the above
Question 134 Explanation: 
Swapping is one of the memory management techniques used by the operating system. Since the size of the RAM is limited and finite, all the processes or programs to be executed cannot be made to fit in it. So the disk is also treated as an extension of the memory and is referred to as virtual memory. The page making process from main memory to disk is called swapping.
Question 135
A Deadlock in an Operating System Is
A
Desirable process
B
Undesirable process
C
Definite waiting process
D
All of the above
Question 135 Explanation: 
  • Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.
  • Deadlock is an undesirable condition but is a definite waiting process
Question 136
In an absolute loading scheme, which loader function is accomplished by a loader ?
A
Re-allocation
B
Allocation
C
Linking
D
Loading
Question 136 Explanation: 
  • In an absolute loading scheme, Loading function is accomplished by the loader
  • The loader loads the executable code into memory, the program and data stacks are created, and the registers are initialized
  • Relocation changes relocatable addresses into absolute addresses
Question 137
________ synchronizes critical resources to prevent deadlock.
A
P-operator
B
V-operator
C
Semaphore
D
Swapping
Question 137 Explanation: 
operations on Semaphores

Note:
P operation also called as wait operation decrements the value of semaphore variable by 1.
V operation also called as signal operation increments the value of semaphore variable by 1.
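As an illustration (my own example, not part of the original answer), the sketch below uses POSIX unnamed semaphores, where sem_wait corresponds to the P (wait) operation and sem_post to the V (signal) operation; compile with -pthread on most systems.

#include <stdio.h>
#include <semaphore.h>

int main(void) {
    sem_t s;
    sem_init(&s, 0, 1);   /* binary semaphore, initial value 1 */

    sem_wait(&s);         /* P (wait): decrement; blocks if the value is 0 */
    /* ---- critical section: only one process/thread can be here ---- */
    printf("inside critical section\n");
    sem_post(&s);         /* V (signal): increment, possibly waking a waiter */

    sem_destroy(&s);
    return 0;
}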

Question 138
_______ is one of preemptive scheduling algorithm.
A
RR
B
SSN
C
SSF
D
Priority based
Question 138 Explanation: 
  • Round Robin(RR) is the preemptive process scheduling algorithm.
  • Shortest Job Next (SJN/SJF) is a non-preemptive scheduling algorithm.
  • First Come First Serve (FCFS) is a non-preemptive scheduling algorithm.
  • Priority Based Scheduling can be implemented as either a non-preemptive or a preemptive algorithm.
  • Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
Question 139
In order to allow only one process to enter its critical section, binary semaphore are initialized to
A
0
B
1
C
2
D
3
Question 139 Explanation: 
Initially Semaphore S value=1
In order to let one process enter the critical section, we perform the wait (P, down) operation, which decrements the value of S by 1. Now S = 0, so no other process can successfully perform a further down on the semaphore until a signal (V) operation is executed.

Example :
Semaphore S ; // initialized to 1
do
{
   wait(S) // Refer note    
       // critical section
   signal(S)
     // remaining section
}
while (TRUE)
Note :
wait(S) // Every process checks the value of S here and enters the critical section only if S > 0
Question 140
Remote Computing Service involves the use of time sharing and _______.
A
Multi-processing
B
Interactive processing
C
Batch processing
D
Real-time processing
Question 140 Explanation: 
  • Remote Computing Service involves the use of time sharing and batch processing
  • Remote time-sharing and remote batch processing are two modes of computer access, offer some advantages for small and remotely located institutions.
  • These include less expense and less long-term commitment than owning an equivalent on-campus facility, less management and administrative supervision and better quality and variety of services than a small institution could manage on a dedicated basis.
Question 141
With a four programs in memory and with 80% average I/O wait, the CPU utilization is ?
A
60%
B
70%
C
90%
D
100%
Question 141 Explanation: 
Given,
I/O wait = 80%
Total Programs in Memory=4
CPU utilization can be modeled by the below formula :
CPU utilization = 1 - p^n
Where
n = Number of processes
p = fraction of time a process spends waiting for I/O
CPU utilization = 1 - p^n
CPU utilization = 1 - 0.8^4 = 1 - 0.4096 ≈ 0.59, i.e. about 60%

Notes
  • Let P be the fraction of time that a process spends away from the CPU.
  • If there is one process in memory, the CPU utilization is (1-P).
  • If there are N processes in memory, the probability of N processes waiting for an I/O is P*P...*P (N times).
  • The CPU utilization is ( 1 - P^N ) where N is called the multiprogramming level (MPL) or the degree of multiprogramming.
  • As N increases, the CPU utilization increases.
  • While this equation indicates that a CPU continues to work more efficiently as more and more processes are added, logically, this cannot be true.
  • Once the system passes the point of optimal CPU utilization, it thrashes.
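A small illustrative C snippet (my own, not part of the original solution) computing the 1 - p^n formula with the numbers from this question; link with -lm where required.

#include <stdio.h>
#include <math.h>

int main(void) {
    double p = 0.80;   /* fraction of time a process spends waiting for I/O */
    int    n = 4;      /* degree of multiprogramming (programs in memory)   */

    double utilization = 1.0 - pow(p, n);   /* 1 - p^n */
    printf("CPU utilization = %.1f%%\n", utilization * 100.0);   /* ~59%, i.e. about 60% */
    return 0;
}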
Question 142
Assume N segments in memory and a page size of P bytes. The wastage on account of internal fragmentation is :
A
NP/2 bytes
B
P/2 Bytes
C
N/2 Bytes
D
NP Bytes
Question 142 Explanation: 
Given,
Total Segments = N
Page size = P bytes
Wastage on account of internal fragmentation = (N*P) / 2 bytes
Question 143
Assertion (A) :  Bit maps are not often used in memory management.
Reason (R) : Searching a bitmap for a run of given length is a slow operation.
A
Both (A) and (R) are true and (R) is correct explanation for (A)
B
Both (A) and (R) are true but (R) is not correct explanation for (A)
C
(A) is true (R) is false
D
(A) is false (R) is true
Question 143 Explanation: 
Bit maps are not often used in memory management because Searching a bitmap for a run of given length is a slow operation.

The main problem with a bit map memory scheme is when we need to allocate memory to a process. Assume the allocation size is 4 bytes. If a process requests 256 bytes of memory, we must search the bit map for 64 consecutive zeroes. This is a slow operation and for this reason bit maps are not often used.

Refer : http://www.cs.nott.ac.uk/~pszgxk/courses/g53ops/Memory%20Management/MM06-linkedlistandbitmaps.html#:~:text=The%20other%20problem%20with%20a,maps%20are%20not%20often%20used.
Question 144
Suppose it takes 100 ns to access a page table and 20 ns to access associative memory with a 90% hit rate, the average access time equals :
A
20 ns
B
28 ns
C
90 ns
D
100 ns
Question 144 Explanation: 
Given data,
Page table access time = 100 ns
Associative memory access time = 20 ns
Hit rate = 90% = 0.9
Miss ratio = (1-Hit ratio)=1-0.9=0.1
Access time = Hit ratio * Associative memory access time + Miss ratio * Page Table access time
Access time= 0.9* 20 + (1-0.9)*100
Access time =28ns
Question 145
The linker :
A
Is similar to interpreter
B
Uses source code as its input
C
Is required to create a load module
D
None of the above
Question 145 Explanation: 
Linker is a program that takes one or more object files generated by a compiler and combines them into a single executable file, library file, or another object file.

Notes :
linker and loader
Question 146
Variable partition memory management technique with compaction results in :
A
Reduction of fragmentation
B
Minimal wastage
C
Segment sharing
D
None of the above
Question 146 Explanation: 
  • One way to remove external fragmentation is compaction. When dynamic partitioning is used for memory allocation then external fragmentation can be reduced by merging all the free memory together in one large block.
  • This technique is also called defragmentation. This larger block of memory is then used for allocating space according to the needs of the new processes.
Question 147
In the process management Round-robin method is essentially the preemptive version of _________
A
FILO
B
FIFO
C
SSF
D
Longest time first
Question 147 Explanation: 
  • In Round robin scheduling , a queue is maintained which stores the input as it comes in FIFO order
  • Round robin scheduling is a preemptive version of first-come, first-served scheduling.
  • Processes are dispatched in a first-in-first-out sequence but each process is allowed to run for only a limited amount of time.
  • This time interval is known as a time-slice or quantum.
  • If a process does not complete or get blocked because of an I/O operation within the time slice, the time slice expires and the process is preempted.
  • This preempted process is placed at the back of the run queue where it must wait for all the processes that were already in the queue to cycle through the CPU.
Question 148
A page fault
A
Is an error specific page.
B
Is an access to the page not currently in memory.
C
Occur when a page program occur in a page memory.
D
Page used in the previous page reference.
Question 148 Explanation: 
Page fault
When we want to load a page into memory and the page is not already in memory, it is called a page fault. A page fault is also called a page miss.

Page Hit
When we want to load a page and it is already available in memory, it is called a page hit.

Cache Hit
Cache memory is a small memory that operates at a faster speed than physical memory, and we always check the cache before going to physical memory. If we are able to locate the corresponding word inside the cache, it is called a cache hit and we do not even need to go to physical memory.

Cache Miss
Only when the mapping to cache memory is unable to find the corresponding block (a block is similar to a physical-memory page frame) inside the cache (a cache miss) do we go to physical memory and go through the page table or TLB.


NOTES :
page hit and page fault and TLB miss and TLB HIT
Question 149
A semaphore count of negative n means (s = – n) that the queue contains ________ waiting processes.
A
N + 1
B
N
C
N – 1
D
0
Question 149 Explanation: 
A semaphore count of negative n (S = -n) means that the queue contains n waiting processes; a non-negative count means that the queue of waiting processes is empty.

Notes :
counting semaphore
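The toy sketch below is my own illustration, assuming a semaphore implementation whose internal count is allowed to go negative (so that |count| equals the number of queued processes); it is only a counter, not a real blocking semaphore.

#include <stdio.h>

/* Toy model of a counting semaphore whose value may become negative.
   A negative value -n means n "processes" are blocked in the queue. */
struct sem { int value; };

static void P(struct sem *s) { s->value--; /* if now < 0, the caller would block  */ }
static void V(struct sem *s) { s->value++; /* if still <= 0, wake one waiter      */ }

int main(void) {
    struct sem s = {1};           /* binary semaphore: one process may enter */

    P(&s);  P(&s);  P(&s);        /* three processes call wait() in a row    */
    printf("value = %d, waiting processes = %d\n",
           s.value, s.value < 0 ? -s.value : 0);   /* value = -2, 2 waiting  */

    V(&s);                        /* one signal wakes one waiting process    */
    printf("value = %d, waiting processes = %d\n",
           s.value, s.value < 0 ? -s.value : 0);   /* value = -1, 1 waiting  */
    return 0;
}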
Question 150
A program is located in the smallest available hole in the memory is _________
A
Best – fit
B
First – bit
C
Worst – fit
D
Buddy
Question 150 Explanation: 
best fit and worst fit
Best fit places a program in the smallest available hole (free block) that is large enough to hold it; worst fit places it in the largest available hole.
Question 151
The Unix command used to find out the number of characters in a file is
A
Nc
B
Wc
C
Chcnt
D
Lc
Question 151 Explanation: 
wc (word count) command in Unix/Linux is used to find the number of lines, words, bytes and characters in the files specified by the file arguments.

Syntax of wc command
wc [options] filenames

Below are the options and usage 
wc -l     : Prints the number of lines in a file.
wc -w   : prints the number of words in a file.
wc -c    : Displays the count of bytes in a file.
wc -m  : prints the count of characters from a file.
wc -L   : prints only the length of the longest line in a file.
Question 152
Suppose it takes 100 ns to access page table and 20 ns to access associative memory. If the average access time is 28 ns, the corresponding hit rate is :
A
100 percent
B
90 percent
C
80 percent
D
70 percent
Question 152 Explanation: 
Given data,
Page table access time = 100 ns
Associative memory access time = 20 ns
Average Access time = 28 ns
Hit rate = x
Miss ratio = (1-Hit ratio)=1-x

Average Access time = Hit ratio * Associative memory access time + Miss ratio * Page Table access time

28ns = x*20ns + (1-x)*100ns
28 = 20x + 100-100x
28 = 100-80x
80x= 100-28
x=72/80=0.9
Hit ratio = 90%=0.9
Question 153
N processes are waiting for I/O. A process spends a fraction p of its time in I/O wait state. The CPU utilization is given by :
A
1 - p^-N
B
1 - p^N
C
p^N
D
p^-N
Question 153 Explanation: 
  • Let P be the fraction of time that a process spends away from the CPU.
  • If there is one process in memory, the CPU utilization is (1-P).
  • If there are N processes in memory, the probability of N processes waiting for an I/O is P*P...*P (N times).
  • The CPU utilization is ( 1 - P^N ) where N is called the multiprogramming level (MPL) or the degree of multiprogramming.
  • As N increases, the CPU utilization increases.
  • While this equation indicates that a CPU continues to work more efficiently as more and more processes are added, logically, this cannot be true.
  • Once the system passes the point of optimal CPU utilization, it thrashes.
Example
With a four programs in memory and with 80% average I/O wait, the CPU utilization is ?
Given,
I/O wait = 80%
Total Programs in Memory=4
CPU utilization can be modeled by the below formula :
CPU utilization = 1 - p^n
Where
n = Number of processes
p = fraction of time a process spends waiting for I/O
CPU utilization = 1 - p^n
CPU utilization = 1 - 0.8^4 = 1 - 0.4096 ≈ 0.59, i.e. about 60%
Question 154
An example of a non-preemptive scheduling algorithm is :
A
Round Robin
B
Priority Scheduling
C
Shortest job first
D
2 level scheduling
Question 154 Explanation: 
Algorithms based on preemptive scheduling are: Round Robin (RR),Shortest Remaining Time First (SRTF), Priority (preemptive version)

Algorithms based on non-preemptive scheduling are: Shortest Job First (SJF basically non preemptive) and Priority (non preemptive version)

There are basically two types of SJF methods
1) Non-Preemptive SJF and
2) Preemptive SJF.

In non-preemptive scheduling, once the CPU cycle is allocated to process, the process holds it till it reaches a waiting state or terminated.

In Preemptive SJF Scheduling, jobs are put into the ready queue as they arrive. If a process with a shorter burst time arrives while another process is executing, the current process is preempted and the shorter job is executed first.

SJF is frequently used for long term scheduling.
Question 155
An example of a distributed OS is :
A
Amoeba
B
UNIX
C
MS-DOS
D
MULTICS
Question 155 Explanation: 
Distributed operating systems differ from network operating systems in supporting a transparent view of the entire network, in which users normally do not distinguish local resources from remote resources.

List of Distributed Operating Systems :
  • AEGIS
    OS for the Apollo DOMAIN Distributed system. Early 1980s.
  • AMOEBA
    A distributed OS based partly on UNIX. Based on passive data objects protected by encrypted capabilities. 1980s [Tanenbaum & Mullender 1981, Mullender & Tanenbaum 1986].
  • Arachne
    A distributed operating system developed at the U. of Wisconsin. Late 1970s [Finkel 1980].
  • Charlotte
    Distributed OS for the Crystal Multicomputer project at the U. of Wisconsin. Explores coarse-grained parallelism without shared memory for computationally intensive tasks. 1980s [Finkel et al 1989].
  • CHOICES
    Distributed, object-oriented OS featuring a high degree of customization. U. of Idaho, 1990s [Campbell et al 1993].
  • Clouds
    A distributed object-based operating system developed at Georgia Institute of Technology. Early 1990s. [DasGupta 1991]
  • CMDS
    The Cambridge Model Distributed System. U. of Cambridge (England). Late 1970s [Wilkes & Needham 1980].
  • CONDOR
    A distributed OS described as a "hunter of idle workstations," which distributes large computationally intensive jobs among available processors in a workstation pool. U. Wisconsin at Madison, 1980s [Litzkow 1988].
  • Cronus
    Object-oriented distributed computing system for heterogenous environments. BBN Systems, 1980s [Schantz et al 1986].
  • DEMOS/MP
    A distributed version of the DEMOS operating system. Message-based, featuring process migration. U.C. Berkeley, early 1980s [Miller et al 1984].
  • DISTOS
    A Distributed OS for a network of 68000s.
  • DISTRIX
    Message-based distributed version of Unix. Early 1980s.
  • DUNIX
    A distributed version of UNIX developed at Bell Labs. late 1980s [xxx 1988].
  • Eden
    A distributed object-oriented OS at the U. of Washington, based on an integrated distributed network of bit-mapped workstations. Capability-based. Early 1980s [Almes et al 1985].
  • Galaxy
    A distributed UNIX-compatible system featuring multi-level IPC and variable-weight processes. Univ. of Tokyo, late 1980s [Sinha et al 1991].
  • LOCUS
    Distributed OS based on UNIX. Mid 1980s. [Popek & Walker, 1985].
  • MDX
  • MICROS
    Distributed OS for MICRONET, a reconfigurable network computer. Late 1970s [Wittie & van Tilborg 1980].
  • MOS
    An early version of MOSIX. Controls four linked PDP-11s. Mid 1980s [Barak & Litman 1985].
  • MOSIX
    A distributed version of UNIX supporting full transparency and dynamic process migration for load balancing. Developed at the Hebrew U. of Jersusalem. Mid 1980's to 1990's [Barak et al 1993].
  • Newark
    Early version of Eden developed for the VAX environment. The name was chosen because it was "far from Eden."
  • NSMOS
    A version of MOSIX for National Semiconductor VR32 systems. late 1980's [Barel 1987].
  • Plan9
    Distributed UNIX-like system developed at Bell Labs by the originators of UNIX. Features per-process name-spaces, allowing each process a customized view of the resources in the system. 1990s [Pike et al 1995].
  • REPOS
    Operating System for small PDP-11's attached to a host computer. Late 1970s [Maegaard & Andreasan 1979].
  • RIG
    Rochester Intelligent Gateway. Network OS developed at the University of Rochester. Influenced Accent and Mach. Early 1970s [Ball et al 1976].
  • Roscoe
    Distributed OS for multiple identical processors (LSI-11s). University of Wisconsin, Late 1970s [Solomon & Finkel 1979].
  • Saguaro
    Distributed OS at the U. of Arizona, supporting varying degrees of transparency. Mid 1980s [Andrews et al 1987].
  • SODA
    A Simplified OS for Distributed Applications. Mid 1980s [Kepecs & Solomon 1985].
  • SODS/OS
    OS for a Distributed System developed on the IBM Series/1 at the U. of Delaware. Late 1970s [Sincoskie & Farber 1980].
  • Spring
    Distributed multiplatform OS developed by Sun. Not related to the Spring Kernel, a real-time system. 1990s [Mitchell et al 1994].
  • Uniflex
    Multitasking, multiprocessing OS for the 68000 family. Technical Systems Consultants. Early 1980s [Mini-Micro 1986].
  • V
    Experimental Distributed OS linking powerful bit-mapped workstations at Stanford U. Early 1980s [Cheriton 1984, Berglund 1986].
Question 156
Which of the following OS treats hardware as a file system ?
A
UNIX
B
DOS
C
Windows NT
D
None of the above
Question 156 Explanation: 
  • File system controls how data is stored and retrieved. Unix treats everything (including hardware devices) as a file.
  • All the files in a Unix system are organized as a inverted tree-like structure. This hierarchical structure in which all the files are stored is referred to as a file system.
  • One of the most important of these is probably the mantra: “everything is a file”, widely regarded as one of the defining points of UNIX.
  • This key design principle consists of providing a unified paradigm for accessing a wide range of input/output resources: documents, directories, hard-drives, CD-Roms, modems, keyboards, printers, monitors, terminals and even some inter-process and network communications.
  • The trick is to provide a common abstraction for all of these resources, each of which the UNIX fathers called a “file”.
Refer :
https://web.archive.org/web/20120320050159/http://ph7spot.com/musings/in-unix-everything-is-a-file
https://en.wikipedia.org/wiki/Everything_is_a_file
Question 157
In which of the following, ready to execute processes must be present in RAM ?
A
Multiprocessing
B
Multiprogramming
C
Multitasking
D
In all of the above
Question 157 Explanation: 
Ready queue
  • This queue keeps a set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
  • The long-term scheduler selects processes from the job pool and loads them into main memory; once in memory they join the ready queue, from which the short-term (CPU) scheduler picks the next process to run.
Refer : https://stackoverflow.com/questions/32550593/where-is-ready-queue-located-in-a-computer-system-main-memory-secondary-memor
Question 158
If the executing program size is greater than the existing RAM of a computer, it is still possible to execute the program if the OS supports :
A
Multitasking
B
Virtual memory
C
Paging system
D
None of the above
Question 158 Explanation: 
Virtual memory concept
Question 159
Files that are related to input/output and are used to model serial I/O devices such as terminals, printers and networks are called :
A
Regular files
B
Character special files
C
Directories
D
Block special files
Question 159 Explanation: 
Files are used to store information on various storage media such as magnetic disks.

Types of files

Regular Files
Regular files are either ASCII files or binary files; ASCII files are also known as text files.

Directories
Directories are system files for maintaining the structure of the file system. In windows they are also called, as folders

Character Special File
These are system files. They are related to Input / Output and used to model serial I / O devices such as terminals, printers and networks.

Block special Files
These files are used to model disks.
A block special file acts as a direct interface to a block device. A block device is any device which performs data I/O in units of blocks.
Question 160
An example of a non-preemptive CPU scheduling algorithm is :
A
Shortest job first scheduling.
B
Round robin scheduling.
C
Priority scheduling.
D
Fair share scheduling.
Question 160 Explanation: 
Algorithms based on preemptive scheduling are: Round Robin (RR),Shortest Remaining Time First (SRTF), Priority (preemptive version)

Algorithms based on non-preemptive scheduling are: Shortest Job First (SJF basically non preemptive) and Priority (non preemptive version)

There are basically two types of SJF methods
1) Non-Preemptive SJF and
2) Preemptive SJF.

In non-preemptive scheduling, once the CPU cycle is allocated to process, the process holds it till it reaches a waiting state or terminated.

In Preemptive SJF Scheduling, jobs are put into the ready queue as they arrive. If a process with a shorter burst time arrives while another process is executing, the current process is preempted and the shorter job is executed first.

SJF is frequently used for long term scheduling.
Question 161
There are ‘n’ processes in memory. A process spends a fraction ‘p’ of its time waiting for I/O to complete. The CPU utilization is given by :
A
p^n
B
1 - p^n
C
(1 - p)^n
D
1 - np
Question 161 Explanation: 
  • Let P be the fraction of time that a process spends away from the CPU.
  • If there is one process in memory, the CPU utilization is (1-P).
  • If there are N processes in memory, the probability of N processes waiting for an I/O is P*P...*P (N times).
  • The CPU utilization is ( 1 - P^N ) where N is called the multiprogramming level (MPL) or the degree of multiprogramming.
  • As N increases, the CPU utilization increases.
  • While this equation indicates that a CPU continues to work more efficiently as more and more processes are added, logically, this cannot be true.
  • Once the system passes the point of optimal CPU utilization, it thrashes.
Example
With a four programs in memory and with 80% average I/O wait, the CPU utilization is ?
Given,
I/O wait = 80%
Total Programs in Memory=4
CPU utilization can be modeled by the below formula :
CPU utilization = 1 - p^n
Where
n = Number of processes
p = fraction of time a process spends waiting for I/O
CPU utilization = 1 - p^n
CPU utilization = 1 - 0.8^4 = 1 - 0.4096 ≈ 0.59, i.e. about 60%
Question 162
An example of a memory management system call in UNIX is :
A
Fork.
B
Mmap.
C
Sigaction.
D
Execve.
Question 162 Explanation: 
mmap() maps files or anonymous memory into a process's address space, so it is a memory-management system call; fork() and execve() are process-management calls and sigaction() is a signal-handling call.
Question 163
With 64 bit virtual addresses, a 4KB page and 256 MB of RAM, an inverted page table requires :
A
8192 entries.
B
16384 entries.
C
32768 entries.
D
65536 entries.
Question 163 Explanation: 
Given data,
Virtual address size = 64 bits
Page size = 4 KB = 2^12
RAM size = 256 MB = 2^28
Inverted page table entries = ?

Inverted page table entries = Size of RAM / Page size

Inverted page table entries = 2^28 / 2^12
Inverted page table entries = 2^16 = 65,536

Note:
Inverted page tables have 1 entry / physical frame of memory
Question 164
A computer has 6 tape drives with ‘n’ processes competing for them. Each process may need two drives. For which values of ‘n’ is the system deadlock free ?
A
1
B
2
C
3
D
6
E
None
Question 164 Explanation: 
Given tape drive = 6
Each process need 2 drive.
In order to get maximum number of process for the system to be deadlock free just give each process one drive
deadlock free condition
but in this case a deadlock can occur, because every process holds 1 drive and waits for another drive that is held by some other process. To break this deadlock, reduce the number of processes by 1.
So the maximum value of n for which the system is deadlock free is 6 - 1 = 5.
Question 165
A computer has 16 pages of virtual address space but the size of main memory is only four frames. Initially the memory is empty. A program references the virtual pages in the order
0, 2, 4, 5, 2, 4, 3, 11, 2, 10.
How many page faults occur if LRU page replacement algorithm is used?
A
3
B
5
C
7
D
8
Question 165 Explanation: 
LRU page replacement algorithm
With 4 frames: 0, 2, 4, 5 each cause a fault; 2 and 4 are hits; 3 (evicts 0), 11 (evicts 5) and 10 (evicts 4) cause faults; the second 2 is a hit. Total page faults = 7.
Question 166
Consider a system where each file is associated with a 16-bit number. For each file, each user should have the read and write capability. How much memory is needed to store each user’s access data?
A
16 KB
B
32 KB
C
64 KB
D
128 KB
Question 166 Explanation: 
In the question they specified each user has read and write capability to the file

So, there are 4 possible combinations of the read (R) and write (W) capabilities: neither, read only, write only, and both read and write.

In order to represent these 4 combinations we require at least 2 bits ( 2^2 = 4 )

Each file is associated with a 16-bit number. This means that the number of files = 2^16

So the total memory required to store the access data = 2^16 * 2 = 2^17 bits = 128 Kbits

Converting the above bits into bytes we get 128 Kbits / (8 bits/Byte)

= 2^17/8 = 2^17/2^3 = 2^14 Bytes = 16 KB
Question 167
What are the minimum number of resources required to ensure that deadlock will never occur, if there are currently three processes P1, P2 and P3 running in a system whose maximum demand for the resources of the same type are 3, 4, and 5 respectively?
A
3
B
7
C
9
D
10
Question 167 Explanation: 
Given data,
Number of process = 3
Maximum resource demand for each process
P1=3
P2=4
P3=5
In order to get the minimum number of resources just assign one resource less than their completion(maximum need)
P1=2
P2=3
P3=4
Total allocated resources = 2+3+4 = 9
This will be maximum number of resources that may cause deadlock
So, now add one more resource it will ensure that any one of these process will execute and release all its resources
Hence Minimum number of resources to ensure deadlock will not occur = 9+1 = 10.

Notes :
Step1 : Allocate the resources to the processes one less than the maximum need. i.e, allocated [Max need - 1] to every process.
Step 2 :Add one more resource at the end to ensure that at least one of the process can fulfill his max need. This will be minimum number of resources that are required for deadlock free system.
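A small hedged sketch (my own, not part of the original solution) that computes the same bound, Σ(max_i - 1) + 1, for the processes in the question:

#include <stdio.h>

/* Minimum resources so deadlock can never occur:
   give every process one less than its maximum need, then add one. */
static int min_resources(const int max_need[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += max_need[i] - 1;
    return total + 1;
}

int main(void) {
    int max_need[] = {3, 4, 5};   /* P1, P2, P3 from the question */
    int n = sizeof max_need / sizeof max_need[0];
    printf("Minimum resources = %d\n", min_resources(max_need, n));  /* prints 10 */
    return 0;
}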
Question 168
Dirty bit is used to indicate which of the following?
A
A page fault has occurred
B
A page has corrupted data
C
A page has been modified after being loaded into cache
D
An illegal access of page
Question 168 Explanation: 
Dirty bit
  • A dirty bit or modified bit is a bit that is associated with a block of computer memory and indicates whether or not the corresponding block of memory has been modified.
  • The dirty bit is set when the processor writes to (modifies) this memory. The bit indicates that its associated block of memory has been modified and has not been saved to storage yet.
  • When a block of memory is to be replaced, its corresponding dirty bit is checked to see if the block needs to be written back to secondary memory before being replaced or if it can simply be removed.
  • Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
  • Example : When a page is modified inside the cache and the changes need to be stored back in the main memory, the dirty bit is set to 1 so as to maintain the record of modified pages.
Question 169
What is the size of the physical address space in a paging system which has a page table containing 64 entries of 11 bits each (including valid and invalid bit) and a page size of 512 bytes?
A
2^11
B
2^15
C
2^19
D
2^20
Question 169 Explanation: 
Given data,
Page table entries = 64 entries
Page size = 512 bytes
Page table entry size =11 bits

Without valid/invalid bit
page table entry size is 11 bits to address each page frame
So with 11 bits there are 2^11 frames available
Number of frames = ( Physical address space / Frame size )
We know that frame size and page size are the same
Physical address space = ( Number of frames * Frame size )
Physical address space = 2^11 * 512 bytes = 2^11 * 2^9 bytes = 2^20 Bytes

With valid/invalid bit
Paging bits = 11 - 1 = 10 (as 1 valid bit is also included)
Therefore 10 bits remain for addressing the page frames.
1 bit is used to identify whether the page is valid or invalid.
We know that frame size and page size are the same
Physical address space = ( Number of frames * Frame size )
Physical address space = 2^10 * 512 bytes = 2^10 * 2^9 bytes = 2^19 bytes.
Question 170
Which of the following is not an optimization criterion in the design of a CPU scheduling algorithm?
A
Minimum CPU utilization
B
Maximum throughput
C
Minimum turnaround time
D
Minimum waiting time
Question 170 Explanation: 
-Minimum CPU utilization is not an optimization criterion.
-Optimization techniques and scheduling algorithms are used to bring the best CPU performance.

Scheduling Algorithm Optimization Criteria
-Maximum CPU utilization
-Maximum throughput
-Minimum turnaround time
-Minimum waiting time
-Minimum response time
Question 171
At a particular time of computation, the value of a counting semaphore is 10. Then 12 P operations and “x”  V operations were performed on this semaphore. If the final value of semaphore is 7, x will be :
A
8
B
9
C
10
D
11
Question 171 Explanation: 
Given,
Counting Semaphore value (S) = 10
P operations = 12
V operations = x
Final Semaphore value (S) = 7

P operation also called as wait operation decrements the value of semaphore variable by 1.
V operation also called as signal operation increments the value of semaphore variable by 1.

Initially  value of a counting semaphore  S= 10
After 12 P operation are performed "S" Value became = -2
“x” V operations are performed on this semaphore S
now Final value of counting semaphore S = 7
x + (-2) = 7
x -2 = 7
x = 7+2
x = 9.
Question 172
In a paged memory, the page hit ratio is 0.40. The time required to access a page in secondary memory is equal to 120 ns. The time required to access a page in primary memory is 15 ns. The average time required to access a page is
A
105
B
68
C
75
D
78
Question 172 Explanation: 
Given,
Page Hit Ratio is = 0.40
Secondary Memory Access Time=120 ns
Primary Memory Access Time=15 ns
Miss Ratio =1 – hit ratio
Average access time= ?

Average access time = hit ratio * primary memory access time + (1 – hit ratio) * secondary memory access time

Average access time = 0.4 * 15 + 0.6 * 120
Average access time = 6 + 72
Average access time = 78.
Question 173
In a multi-user operating system, 30 requests are made to use a particular resource per hour, on an average. The probability that no requests are made in 40 minutes, when arrival pattern is a poisson distribution, is
A
e^-15
B
1 - e^-15
C
1 - e^-20
D
e^-20
Question 173 Explanation: 
The arrival pattern is a Poisson distribution. The arrival rate is 30 requests per hour = 0.5 requests per minute, so the expected number of arrivals in 40 minutes is λ = 0.5 × 40 = 20. The probability of no requests in 40 minutes is P(X = 0) = e^-λ = e^-20.
Question 174
Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In a CPU with memory mapped I/O, there is no explicit I/O instruction. Which one of the following is true for a CPU with memory mapped I/O ?
A
I/O protection is ensured by operating system routines.
B
I/O protection is ensured by a hardware trap.
C
I/O protection is ensured during system configuration.
D
I/O protection is not possible.
Question 174 Explanation: 
  • User applications are not allowed to perform I/O in user mode
  • All I/O requests are handled through system calls that must be performed in kernel mode.
  • The kernel is a trusted software to OS, but all other programs are considered un-trusted software.
  • Thus, all user mode software must request use of the kernel by means of a system call in order to perform privileged instructions, such as process creation or input/output operations.
Question 175
Which UNIX/Linux command is used to make all files and sub-directories in the directory “progs” executable by all users ?
A
chmod -R a+x progs
B
chmod -R 222 progs
C
chmod -X a+x progs
D
chmod -X 222 progs
Question 175 Explanation: 
chmod -R a+x progs is used to make all files and sub-directories in the directory “progs” executable by all users
Question 176
Which of the following statements are true ?
(a) External Fragmentation exists when there is enough total memory space to satisfy a request but the available space is contiguous.
(b) Memory Fragmentation can be internal as well as external.
(c) One solution to external Fragmentation is compaction.
A
(a) and (b) only
B
(a) and (c) only
C
(b) and (c) only
D
(a), (b) and (c)
Question 176 Explanation: 
Statement(A) : False
External Fragmentation exists where there is enough total memory space to satisfy a request but available space is not contiguous.

Statement(B) : True
Memory Fragmentation can be internal as well as external.

Statement(C) : True
One solution to external Fragmentation is compaction or shuffle memory contents.
Best Fit Block Search is the solution for internal fragmentation
Question 177
Page information in memory is also called as Page Table. The essential contents in each entry of a page table is/are
A
Page Access information
B
Virtual Page number
C
Page Frame number
D
Both virtual page number and Page Frame Number
Question 177 Explanation: 
  • Page information in memory is also called as Page Table
  • A page table entry must contain Page frame number.
  • Virtual page number is typically used as index in page table to get the corresponding page frame number.
Question 178
Consider a virtual page reference string 1, 2, 3, 2, 4, 2, 5, 2, 3, 4. Suppose LRU page replacement algorithm is implemented with 3 page frames in main memory. Then the number of page faults are
A
5
B
7
C
9
D
10
Question 178 Explanation: 
Given String,
1, 2, 3, 2, 4, 2, 5, 2, 3, 4
LRU page replacement algorithm Total number of page faults are 7
Question 179
In which of the following scheduling criteria, context switching will never take place ?
A
ROUND ROBIN
B
Preemptive SJF
C
Non-preemptive SJF
D
Preemptive priority
Question 179 Explanation: 
In Non – preemptive algorithms context switching will never take place because It doesn’t allow to switch the process until it is completed fully.

Except Non-Preemptive SJF , All are Preemptive.
  • Round Robin will switch because of the time quantum
  • Preemptive SJF will switch if a process with a smaller burst time arrives in the ready queue
  • Preemptive priority will switch if a higher priority process enters the ready queue
Question 180
Among all given option, ___ must reside in the main memory.
A
Assembler
B
Compiler
C
Linker
D
Loader
Question 180 Explanation: 
The loader is a program that is responsible for loading the object program from secondary memory into main memory for execution. The loader must reside in main memory.
Question 181
The process executes the following code and after execution ____ number of child process get created.
fork();
fork();
fork();
fork();
A
4
B
1
C
15
D
16
Question 181 Explanation: 
**With "n" fork() statements executed in sequence, 2^n - 1 child processes are always created**

Given,
number of times fork() is invoked, n = 4
Number of child processes = 2^n - 1
= 2^4 - 1
= 15
  fork ();    // Line 1
  fork ();   // Line 2
  fork ();   // Line 3
  fork ();  // Line 4

         L1              // Now 1 child process is created by line 1
      /       \
    L2         L2        // Now 2 child processes are created by line 2
   /  \       /    \
  L3  L3     L3     L3    // Now 4 child processes are created by line 3
 / \  / \    /  \   / \
L4 L4 L4 L4 L4  L4 L4 L4   // Now 8 child processes are created by line 4
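As a quick sanity check, the following small C sketch (the printf text and the use of getpid() are illustrative, not part of the question) prints one line per process; with four fork() calls it prints 2^4 = 16 lines, i.e. the original process plus 15 children:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    fork();
    fork();
    fork();
    fork();
    /* every one of the 2^4 = 16 processes (1 parent + 15 children)
       reaches this line exactly once */
    printf("hello from pid %d\n", (int)getpid());
    return 0;
}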


Question 182
Find the effective access time for the memory for the given data. Page fault service time = 8 ms, average memory access time = 20 ns, and one page fault is generated for every 10^6 memory accesses.
A
29ns
B
33ns
C
28ns
D
30ns
Question 182 Explanation: 
Given,
Page fault service time = 8 ms
Average memory access time = 20 ns
One page fault is generated for every 10^6 memory accesses
Page fault rate p = 1/10^6
Effective Memory Access Time = p * (page fault service time) + (1 - p) * (memory access time)
Effective Memory Access Time = (1/10^6) * 8 ms + (1 - 1/10^6) * 20 ns
≈ 8 ns + 20 ns
Effective Memory Access Time = 28 ns
Question 183
Identify the true statement from the given statements
(1) FIFO is non preemptive
(2) Round robin is non preemptive
(3) Multilevel queue scheduling is non preemptive
A
(1)
B
(1) and (2)
C
(1),(2) and (3)
D
(2)
Question 183 Explanation: 
  1. FIFO scheduling is non-preemptive.
  2. Round robin is preemptive.
    Round robin scheduling is similar to FIFO but pre-emption is added to enable the system to switch between processes. A small unit of time called time quantum is defined. CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval.
  3. Multilevel Queue Scheduling is preemptive.
    A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, based on some property of the process. Each queue has its own scheduling algorithm. Scheduling among queue is done by fixed priority pre-emptive scheduling.
Question 184
In the disk, swap space is used to _____
A
Save XML files
B
Save process data
C
Save drivers
D
Save HTML files
Question 184 Explanation: 
  • Swap space on the disk is used by the operating system to store pages that are swapped out of memory when not enough physical memory is available.
  • Swap space in Linux is used to increase the amount of apparent memory available on the system or when the amount of physical memory (RAM) is full.
  • If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space.
  • While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM.
  • Swap space is located on hard drives, which have a slower access time than physical memory. At any rate, Linux supports swap space in two forms: as a separate disk partition or a file somewhere on your existing Linux file systems.
  • If the system is thrashing because of a lack of physical RAM, swap files are a good way to add swap space on demand.
Question 185
In a system, counting semaphore was initialized to 10. Then 6P(wait) operations and 4V (signal) operations were completed on this semaphore. So ___ is the final value of the semaphore.
A
7
B
8
C
13
D
12
Question 185 Explanation: 
Given,
Counting Semaphore value (S) = 10
P(wait) operations = 6
V(signal) operations = 4
Final Semaphore value (S) = ?

P operation also called as wait operation decrements the value of semaphore variable by 1.
V operation also called as signal operation increments the value of semaphore variable by 1.

Initially, the value of the counting semaphore S = 10
After 6 P operations are performed, S = 10 - 6 = 4
After 4 V operations are performed on S,
the final value of the counting semaphore S = 4 + 4 = 8
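A minimal POSIX-semaphore sketch in C reproduces the arithmetic (sem_init, sem_wait, sem_post and sem_getvalue are standard POSIX calls; on Linux compile with -pthread). The loop counts come from the question; everything else is illustrative:

#include <stdio.h>
#include <semaphore.h>

int main(void)
{
    sem_t s;
    int value, i;

    sem_init(&s, 0, 10);        /* counting semaphore initialised to 10 */
    for (i = 0; i < 6; i++)
        sem_wait(&s);           /* each P (wait) decrements the value */
    for (i = 0; i < 4; i++)
        sem_post(&s);           /* each V (signal) increments the value */

    sem_getvalue(&s, &value);
    printf("final semaphore value = %d\n", value);   /* prints 8 */
    sem_destroy(&s);
    return 0;
}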
Question 186
Decreasing the RAM causes ______
A
Fewer page faults
B
More page faults
C
Virtual memory get increases
D
Virtual memory get decreases
Question 186 Explanation: 
  • A page fault occurs when a program attempts to access a block of memory that is not stored in the physical memory, or RAM.
  • The fault notifies the operating system that it must locate the data in virtual memory, then transfer it from the storage device, such as an HDD or SSD, to the system RAM.
  • Increasing the physical RAM on your system generally produces fewer page faults (although designing the application differently can help even more than adding RAM).
  • Conversely, decreasing the physical RAM on your machine results in more page faults.
Question 187
The maximum combined length of the command-line arguments including the spaces between adjacent arguments is:
A
128 characters
B
256 characters
C
67 characters
D
It may vary from one OS to another
Question 187 Explanation: 
  • Windows NT 3.1, produced by Microsoft as part of the Windows NT family of operating systems, accepts 128 characters
  • Microsoft Windows NT Workstation 3.1 accepts 128 characters
  • Windows NT 4.0 accepts 2046 characters
  • Windows 2000 accepts 2046 characters
  • On computers running Microsoft Windows XP or later, the maximum length of the string that you can use at the command prompt is 8191 characters.
Likewise, the number of characters the command line can accept differs for Windows, UNIX, Linux, DOS, etc., and, as the Windows versions above show, it also changes from one version to the next. So the limit varies from one OS to another.
Question 188
Which of the following is not an input device?
A
Mouse
B
Keyboard
C
Light Pen
D
VDU
Question 188 Explanation: 

Input device
  • Mouse
  • Keyboard
  • Light Pen
VDU stands for visual/video display unit: a screen on which information from a computer can be shown.

Light pen  allows the user to point to displayed objects or draw on the screen in a similar way to a touchscreen but with greater positional accuracy.

Refer more about light pen:
https://en.wikipedia.org/wiki/Light_pen#:~:text=A%20light%20pen%20is%20a,ray%20tube%20(CRT)%20display.&text=A%20light%20pen%20detects%20changes,this%20event%20to%20the%20computer.
Question 189
The Banker's algorithm is used:
A
To rectify deadlock
B
To prevent deadlock
C
To detect deadlock
D
To detect and solve deadlock
Question 189 Explanation: 
The Banker algorithm, sometimes referred to as the detection algorithm, is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.
Source : wikipedia.org
Question 190
Which of the following statements about semaphores is true?
A
P and V operations should be indivisible operations
B
A semaphore implementation should guarantee that threads do not suffer indefinite postponement
C
If several threads attempt a P(S) operation simultaneously, only one thread should be allowed to proceed.
D
All of the above
Question 190 Explanation: 
  • P and V operations should be indivisible operations. - True
  • A semaphore implementation should guarantee that threads do not suffer indefinite postponement.- True
  • If several threads attempt a P(S) operation simultaneously, only one thread should be allowed to proceed. - True
 
Question 191
The file manager is responsible for
A
Naming files
B
Saving files
C
Deleting files
D
All the above
Question 191 Explanation: 
The File Manager is a system software responsible for the creation, deletion, modification of the files and managing their access, security and the resources used by them. These functions are performed in collaboration with the Device Manager.
Question 192
In real time operating systems, which of the following is the most suitable scheduling scheme?
A
Round Robin
B
First come first serve
C
Preemptive
D
Random scheduling
Question 192 Explanation: 
  • Preemptive scheduling is the most suitable scheduling scheme for real time operating systems as it can always preempt other processes based on priority.
  • In real time operating systems, we have different tasks to perform simultaneously and also there are a few tasks which need to be performed first at priority basis.
  • Real-time systems require that results be produced within a specified deadline.
  • One of the key features of a real-time system is its ability to respond to a real-time process as soon as that process requires the CPU. As a result, the scheduler for a real-time system must support a priority-based algorithm with preemption.
Question 193
If there are 32 segments, each of size 1 K byte, then the logical address should have
A
13 bits
B
14 bits
C
15 bits
D
16 bits
Question 193 Explanation: 
Given data,
Number of segments = 32 = 2^5
Size of each segment = 1K bytes = 2^10 bytes
Size of logical address space = Number of segments * Size of each segment
Size of logical address space = 2^5 * 2^10
Size of logical address space = 2^15 = 32 KB
Number of bits required to represent the logical address space = 15 bits

Step by step explanation :

To specify a particular segment, 5 bits are required,
since 32 can be written as 2^5.

Size of each segment = 1K bytes
1K can be written as 2^10,
so 10 bits are required to address a byte within a segment.

To select a particular byte after selecting a segment we require 10 + 5 = 15 bits.
So the total is 10 + 5 = 15 bits.

Note :
K = 2^10
M = 2^20
G = 2^30
Question 194
Relocation bits used by relocating loader are specified by
A
Relocating loader itself
B
Linker
C
Assembler
D
Macro Processor
Question 194 Explanation: 
  • Relocation bits used by relocating loader are specified or generated by linker
  • A linker or link editor is a computer program that takes one or more object files generated by a compiler and combines them into a single executable file, library file, or another object file.
  • Linking: Combining two or more object programs into a single unit (“load module”).
  • Loading: Bringing an object program (load module) into memory for execution.
  • Relocation: Modifying an object program so that it can be loaded from a location different from the one originally specified.
Refer : https://www.albany.edu/~csi402/pdfs/lect_08.pdf
Question 195
The most powerful parser is
A
SLR
B
LALR
C
Canonical LR
D
Operator Precedence
Question 195 Explanation: 
Canonical LR (CLR) is the most powerful of the LR(k) parsers.
CLR > LALR > SLR
So the most powerful parser is CLR.
Question 196
Serial access memories are useful in applications where
A
Data consists of numbers
B
Short access time is required
C
Each stored word is processed differently
D
Data naturally needs to flow in and out in serial form
Question 196 Explanation: 
  • Serial access memories are useful in applications where data naturally needs to flow in and out in serial form
  • Serial access memories do not require an address so they are useful in the system in which data travels serially below are the examples
    -SHIFT REGISTERS
    -Serial in Parallel Out (SIPO)
    -Parallel in Serial Out(PISO)
    -FIFO/LIFO QUEUES
    -TAPPED DELAY LINE
Question 197
Suppose P, Q and R are co-operating processes satisfying Mutual Exclusion condition. Then if the process Q is executing in its critical section then
A
‘P’ executes in critical section
B
‘R’ executes in critical section
C
Neither ‘P’ nor ‘R’ executes in its critical section
D
Both ‘P’ and ‘R’ executes in critical section
Question 197 Explanation: 
Mutual exclusion means only one process can enter the critical section at a time, so when process Q is in its critical section no other process can enter its critical section.
Question 198
How many wires are threaded through the cores in a coincident-current core memory?
A
2
B
3
C
4
D
6
Question 198 Explanation: 
  • Early systems used to have four wires: X, Y, Sense, and Inhibit, but later cores combined the latter two wires into one Sense/Inhibit line.
  • Each toroid stored one bit (0 or 1).
  • One bit in each plane could be accessed in one cycle, so each machine word in an array of words was spread over a "stack" of planes.
  • Each plane would manipulate one bit of a word in parallel, allowing the full word to be read or written in one cycle.
Refer : https://en.wikipedia.org/wiki/Magnetic-core_memory
Question 199
Which access method is used for obtaining a record from cassette tape?
A
Direct
B
Sequential
C
Random
D
Parallel
Question 199 Explanation: 
  • Tape Drive – Sequential Accessing method
  • Cassette Tape – Sequential Accessing method
  • Hard Disk – Both Random and Sequential Accessing methods
Question 200
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a Translation Lookaside Buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is
A
11 bits
B
13 bits
C
15 bits
D
20 bits
Question 200 Explanation: 
Given data,
Virtual address = 32 bits,
Page size = 4 KB = 2^12 B,
TLB can hold a total of 128 page table entries,
4-way set associative.

Size of a page = 4 KB = 2^12 B, so there are 12 offset bits.
Total number of bits needed to address a page frame = 32 – 12 = 20
This means that out of the 32-bit virtual address, 20 bits form the virtual page number used to look up the TLB.

If there are ‘x’ lines in a set then it is called x-way set associative.
4-way set associative means there are 4 TLB entries in a set.

Number of sets in the TLB = 128/4 = 32 = 2^5, so 5 bits are needed to address a set.
Therefore 15 (20 – 5) bits are used for the tag.
Question 201
Computer uses 46-bit virtual address, 32 bit physical address, and a three level paged page table organization. The page table base register stores the base address of the first level table(T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second level table (T2). Each entry of T2 stores the base address of a page of the third level table(T3). Each entry of T3 stores a page table entry(PTE). The PTE is 32 bits in size, The processor used in the computer has a 1MB 16 way set associative virtually indexed physically tagged cache. the cache block size is 64 bytes. What is the size of a page in KB in this computer?
A
2
B
4
C
8
D
16
Question 201 Explanation: 
Working through the three-level page table constraints, the page size comes out to 8 KB (option C).
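A sketch of the derivation, using only the numbers given in the question (the cache parameters are not needed for this part):
  • Let the page size be 2^p bytes, so the page offset is p bits and the virtual page number is 46 - p bits.
  • Each PTE is 32 bits = 4 bytes, so one page holds 2^p / 4 = 2^(p-2) table entries.
  • T1 occupies exactly one page, so it has 2^(p-2) entries, each pointing to a page of T2; each T2 page has 2^(p-2) entries pointing to pages of T3; each T3 page has 2^(p-2) PTEs, each mapping one virtual page.
  • Virtual pages covered = 2^(p-2) * 2^(p-2) * 2^(p-2) = 2^(3p-6), and this must equal 2^(46-p).
  • 3p - 6 = 46 - p gives 4p = 52, so p = 13 and the page size is 2^13 B = 8 KB.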
Question 202
Consider the following snapshot of a system running n processes. Process i is holding Xi instances of a resource R, 1<=i<=n. Currently, all instances of R are occupied. Further, for all i, process i has placed a request for an additional Yi instances while holding the Xi instances it already has. There are exactly two processes p and q such that Yp = Yq = 0. Which one of the following can serve as a necessary condition to guarantee that the system is not approaching a deadlock?
A
Min(Xp,Xq) < Max(Yi) where i!=p and i!=q
B
Xp+Xq>=min(Yi)where i!=p and i!=q
C
Max(Xp,Xq) >1
D
Min(Xp,Xq)>1
Question 202 Explanation: 

Xp+Xq >= min (Yk) where k != p and k != q

Necessary condition to guarantee no deadlock which means without satisfying this condition, no deadlock is possible.

Both the process p and q have no additional requirement and can be finished releasing Xp + Xq resources without asking for any additional resource.

Using this, we can finish one more process only if condition B is satisfied. If the resources released by p and q are sufficient for another process waiting for Yk resources, then system is not approaching deadlock.

Note: Condition B just ensures that the system can proceed from the current state. It does not guarantee that there won't be a deadlock before all processes are finished.

Refer : https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/7_Deadlocks.html
Question 203
A system has n resources R0,..,Rn-1 and k processes P0,..Pk-1. The implementation of the resource request logic of each process Pi is as follows:
if (i % 2 == 0)
{
 if (i < n) request R(i)
 if (i+2 < n) request R(i+2)
}
else
{
 if (i < n) request R(n-i)
 if (i+2 < n) request R(n-i-2)
}
In which one of the following situations is a deadlock possible?
A
n=40, k=26
B
n=21, k=12
C
n=20, k=10
D
n=41, k=19
Question 203 Explanation: 
  • Deadlock is possible only if the requests of the even-numbered and odd-numbered processes overlap at some resource, and that can happen only when n is odd.
  • So only options (B) and (D) remain.
  • Even-numbered processes acquire resources from the front (low indices) while odd-numbered processes acquire them from the back (high indices).
  • In option (D) there are more than 2*k resources (n = 41 > 2*19 = 38), so the two sequences never meet. Hence the overlap, and therefore a possible deadlock, occurs only in option (B).
Question 204
A system contains three programs and each requires three tape units for its operation. The minimum number of tape units which the system must have such that deadlocks never arise is__
A
6
B
7
C
8
D
9
Question 204 Explanation: 
Given data,
The system contains 3 processes and each process requires 3 tape units.

To find the minimum number of tape units that still allows deadlock, assign two units to each process: that uses 6 units for the 3 processes, and all three processes are then waiting for 1 more unit, so all of them would wait indefinitely.

Assigning one more tape unit lets one of the processes acquire its third unit and complete its operation.

After that process completes, it releases all 3 of its tape units, so the remaining processes can also finish.

Minimum tape units required = (3 processes * 2 tape units) + 1 more tape unit = 7
Question 205
The process of entering data into a storage location
A
Causes variation in its address number
B
Adds to the contents of the location
C
Is called a readout operation
D
Is destructive of previous contents
Question 205 Explanation: 
  • The process of entering data into a storage location is destructive of previous contents
  • When new data is written to a location, it overwrites (destroys) whatever was stored there previously; for example, page replacement algorithms such as LRU replace an existing page in memory with the new one.
Question 206
For providing large storage space, semiconductor-based storage is not preferred nowadays due to its:
A
Lack of sufficient resources
B
High cost per bit value
C
Lack of speed of operation
D
High cost of raw material
Question 206 Explanation: 
  • For providing large storage space, semiconductor-based storage is not preferred nowadays because of its high cost per bit.
  • Semiconductor memory technology gives high speed, but even with increasing integration the cost per bit remains much higher than that of magnetic or optical storage.
Question 207
Suppose the next generation of 64-bit computer systems has a virtual address space of the same size as the physical address space. If the virtual memory mechanism is removed completely in these new systems, which one of the following is true in this scenario?
A
Efficient implementation of multi user supports is no longer possible
B
The processor cache organization can be made more efficient now
C
Hardware support for memory management is no longer needed
D
CPU scheduling can be made more efficient now
Question 207 Explanation: 
Option(A) is the correct Answer.

Virtual memory provides
  1. Increased address space for processes
  2. Memory protection
  3. Relocation

So, when we don't need more address space, even if we get rid of virtual memory, we need hardware support for the other two i.e memory protection and Relocation.

Without hardware support for memory protection and relocation, we can design a system (by either doing them in software or by partitioning the memory for different users) but those are highly inefficient mechanisms. i.e., there we have to divide the physical memory equally among all users and this limits the memory usage per user and also restricts the maximum number of users.
Question 208
Consider a disk pack with 32 surfaces, 64 tracks per surface and 512 sectors per track. 256 bytes of data are stored in a bit serial manner in a sector. The number of bits required to specify a particular sector in the disk is
A
19
B
20
C
18
D
22
Question 208 Explanation: 
Given,
Number of surfaces = 32 (2^5)
Number of tracks per surface = 64 (2^6)
Number of sectors per track = 512 (2^9)
Number of bytes per sector = 256 bytes (2^8)
Total number of sectors
= Number of surfaces * Number of tracks per surface * Number of sectors per track
= 32 * 64 * 512
= 2^5 * 2^6 * 2^9
= 2^20
To identify each sector uniquely we therefore require 5 + 6 + 9 = 20 bits.
Question 209
Dirty bit is used to show the
A
Page with low frequency occurrence
B
Wrong page
C
Page with corrupted data
D
Page that is modified after being loaded into cache memory
Question 209 Explanation: 
Dirty bit
  • A dirty bit or modified bit is a bit that is associated with a block of computer memory and indicates whether or not the corresponding block of memory has been modified.
  • The dirty bit is set when the processor writes to (modifies) this memory. The bit indicates that its associated block of memory has been modified and has not been saved to storage yet.
  • When a block of memory is to be replaced, its corresponding dirty bit is checked to see if the block needs to be written back to secondary memory before being replaced or if it can simply be removed.
  • Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
  • Example : When a page is modified inside the cache and the changes need to be stored back in the main memory, the dirty bit is set to 1 so as to maintain the record of modified pages.
Question 210
A linker is given object module for a set of programs that were compiled separately. What information need not be included in an object module?
A
Object code
B
Relocation bits
C
Names and locations of all external symbols defined in the object module
D
Absolute addresses of internal symbols
Question 210 Explanation: 
Option(A) : Object code is trivially present in an object module.
Option(B) : Relocation bits must be there if we need to have relocation capability.
Option(C) : For the linker to link external symbols, it must know the locations of all external symbols, so the object module includes the names and locations of all external symbols defined in it.
Option(D) : Absolute addresses of internal symbols need not be included in an object module.
Question 211
The file structure that redefines its first record at a base of zero uses the term
A
Relative organization
B
Key fielding
C
Dynamic reallocation
D
All of these
Question 211 Explanation: 
A relative record file contains records ordered by their relative key, a record number that represents the location of the record relative to where the file begins. The relative record number identifies the fixed- or variable-length record.

For example :
  • In a fixed-length relative file organization that contains 10 slots, the first slot has a relative record number of 1, and the tenth slot has a relative record number of 10.
  • In a variable-length relative file organization, the records are ordered according to their relative record number. Records are stored and retrieved according to the relative record number that you set.
Refer this link for more information :
https://www.ibm.com/support/knowledgecenter/SS6SG3_6.1.0/pg/concepts/cpvsm04.html
Question 212
A long-term monitor
A
Should show any immediate performance problems
B
Should show I/O, paging and processor activity
C
Need show only the I/O and processor activity
D
Usually reports only on terminal displays
Question 212 Explanation: 
A long-term monitor should show I/O, paging and processor activity
Question 213
Determine the number of page faults when references to pages occur in the following order:1,2,4,5,2,1,2,4. Assume that the main memory can accommodate 3 pages and the main memory already has the pages 1 and 2, with page 1 having been brought earlier than page 2.(LRU algorithm is used)
A
3
B
5
C
4
D
None of these
Question 213 Explanation: 
Using LRU with 3 frames and pages 1 and 2 already in memory, the number of page faults is 4.
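One way to lay out the trace (three frames, pages 1 and 2 already resident with 1 the older of the two):
  • 1 – hit
  • 2 – hit
  • 4 – fault 1 (free frame) → frames {1, 2, 4}
  • 5 – fault 2, evict 1 (least recently used) → frames {2, 4, 5}
  • 2 – hit
  • 1 – fault 3, evict 4 → frames {2, 5, 1}
  • 2 – hit
  • 4 – fault 4, evict 5 → frames {2, 1, 4}
Total = 4 page faults.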
Question 214
Working set(t,k) at an instant of time,t,is
A
The set of k future references that the operating system will make
B
The set of future references that the operating system will make in the next 'k' time units
C
The set of k references with high frequency
D
The set of pages that have been referenced in the last k time units
Question 214 Explanation: 
The working set W(t, k) at time t is the set of pages referenced by the process during the last k time units (the window ending at t).
Hence option D: the set of pages that have been referenced in the last k time units.
Question 215
The shell
A
Accepts command from the user
B
Maintains directories of files
C
Translates the keyboard character codes
D
None of these
Question 215 Explanation: 
  • A shell is a special user program which provides an interface to the user to use operating system services.
  • The shell accepts human-readable commands from the user and converts them into something the kernel can understand.
  • It is a command language interpreter that executes commands read from input devices such as the keyboard or from files. The shell gets started when the user logs in or starts a terminal.
Question 216
Software that measures, monitors, analyzes and controls real world events is called
A
System Software
B
Real time software
C
Scientific Software
D
Business Software
Question 216 Explanation: 
Software that measures, monitors, analyzes, and controls real-world events is called real-time software
Question 217
When a computer is first turned on or restarted, a special type of absolute loader is executed, called
A
"Compile and GO" loader
B
Boot loader
C
Bootstrap loader
D
Relating Loader
Question 217 Explanation: 
When a computer is first turned on or restarted, a small absolute loader stored in ROM/firmware, called the bootstrap loader, is executed; it loads the operating system (or a larger loader) into memory and transfers control to it.
Question 218
Which of the following is the internal memory of the system(computer)?
A
CPU register
B
Cache
C
Main memory
D
All of these
Question 218 Explanation: 
CPU registers, cache and main memory are all internal memory of the computer; secondary storage such as disks and tapes is external memory. Hence all of these.
Question 219
Suppose for a process P, reference to pages in order are 1, 2, 4, 5, 2, 1, 2, 4. Assume that main memory can accommodate 3 pages and the main memory has already pages 1 and 2 in the order 1 - first, 2- second. At this moment, assume FIFO Page Replacement Algorithm is used then the number of page faults that occur to complete  the execution of process P is
A
6
B
4
C
3
D
5
Question 219 Explanation: 
FIFO Page Replacement Algorithm
Question 220
______________ system call creates new process in Unix.
A
Fork
B
Fork new
C
Create
D
Create new
Question 220 Explanation: 
A new process is created by the fork() system call in UNIX. fork() returns the process ID of the newly created child process to the parent and returns 0 in the child.
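A minimal C sketch of how the return value is typically used (error handling kept to the bare minimum; the messages are illustrative):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a new (child) process */

    if (pid < 0) {
        perror("fork");                 /* fork failed */
    } else if (pid == 0) {
        printf("child: my pid is %d\n", (int)getpid());   /* fork returned 0 */
    } else {
        printf("parent: child pid is %d\n", (int)pid);    /* fork returned the child's pid */
        wait(NULL);                     /* wait for the child to finish */
    }
    return 0;
}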
Question 221
A process residing in main memory and ready and waiting for execution, is kept on
A
Job Queue
B
Execution Queue
C
Wait Queue
D
Ready Queue
Question 221 Explanation: 
A process residing in main memory and ready and waiting for execution, is kept on Ready Queue
process state transition diagram
Question 222
In which one of the following page replacement policies may Belady's anomaly occur?
A
FIFO
B
LRU
C
Optimal
D
MRU
Question 222 Explanation: 
Belady’s anomaly is the name given to the phenomenon where increasing the number of page frames results in an increase in the number of page faults for a given memory access pattern.
This phenomenon is commonly experienced in following page replacement algorithms
First in first out (FIFO)
Second chance algorithm
Random page replacement algorithm
Question 223
A process that is based on IPC mechanism which executes on different systems and can communicate with other processes using message based communication is called__
A
Local Procedure call
B
Remote procedure call
C
Inter Process Communication
D
Remote Machine Invocation
Question 223 Explanation: 
A remote procedure call (RPC) allows a process to invoke a procedure in a process running on a different system, with the request and the result exchanged through message-based communication.
Question 224
Consider a system with m resources of the same type being shared by n processes. Resources can be requested and released by processes only one at a time. The system is deadlock free if and only if:
A
The sum of all max needs is < m+n
B
The sum of all max needs is >m+n
C
Both of above
D
None
Question 224 Explanation: 
Let,
N = Sum of all Need(i),
A = Sum of all Allocation(i),
M = Sum of all Max(i).

Use contradiction to prove :
  • Assume this system is not deadlock free. If there exists a deadlock state, then A = m because there's only one kind of resource and resources can be requested and released only one at a time.
  • Since the sum of all max needs is less than m + n (the given condition), N + A = M < m + n. So N + m < m + n, which gives N < n.
  • This shows that at least one process i has Need(i) = 0, i.e. it already holds everything it will ever request.
  • Since every process holds at least one resource, Pi can finish and release at least 1 resource.
  • So there are now n-1 processes sharing m resources, and the condition still holds.
  • Go on the argument, no process will wait permanently, so there's no deadlock.
Question 225
Semaphores are used to solve the problem of
I. Race Condition
II. Process Synchronization
III. Mutual Exclusion
IV. None of the above
A
I and II
B
II and III
C
All of the above
D
None of the above
Question 225 Explanation: 
The question is loosely framed: semaphores provide mutual exclusion and thereby process synchronization, and the race condition is precisely the problem that mutual exclusion and synchronization are used to solve. So semaphores address I, II and III.


Semaphores in Process Synchronization
Question 226
If there are 32 segments, each size 1K bytes, then the logical address should have
A
13 bits
B
14 bits
C
15 bits
D
16 bits
Question 226 Explanation: 
Given data,
Number of segments = 32 = 2^5
Size of each segment = 1K bytes = 2^10 bytes
Size of logical address space = Number of segments * Size of each segment
Size of logical address space = 2^5 * 2^10
Size of logical address space = 2^15 = 32 KB
Number of bits required to represent the logical address space = 15 bits

Step by step explanation :

To specify a particular segment, 5 bits are required,
since 32 can be written as 2^5.

Size of each segment = 1K bytes
1K can be written as 2^10,
so 10 bits are required to address a byte within a segment.

To select a particular byte after selecting a segment we require 10 + 5 = 15 bits.
So the total is 10 + 5 = 15 bits.

Note :
K = 2^10
M = 2^20
G = 2^30
Question 227
Suppose a system contains n processes and system uses the round-robin algorithm for CPU scheduling then which data structure is best suited for ready queue of the process
A
Stack
B
Queue
C
Circular queue
D
Tree
Question 227 Explanation: 
  • Circular queue is the best data structure for round-robin CPU scheduling algorithm
  • Round-robin scheduling allocates each task an equal share of the CPU time called time quantum.
  • When a process is given the CPU, a timer is set for whatever value has been set for a time quantum. If the process finishes its burst before the time quantum timer expires, then it is swapped out of the CPU just like the normal FCFS algorithm. If the timer goes off first, then the process is swapped out of the CPU and moved to the back end of the ready queue.
  • The ready queue is maintained as a circular queue, so when all processes have had a turn, then the scheduler gives the first process another turn, and so on.
  • Processes sit in a circular queue: when a process's allocated CPU time expires, it is put at the end of the queue and the next process is taken from the front, as sketched below.
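A compact C sketch of the idea, using an array-based circular queue (the process count, burst times and quantum are made-up values for illustration):

#include <stdio.h>

#define MAX 10

static int queue[MAX], head = 0, tail = 0, count = 0;

static void enqueue(int pid) { queue[tail] = pid; tail = (tail + 1) % MAX; count++; }
static int  dequeue(void)    { int pid = queue[head]; head = (head + 1) % MAX; count--; return pid; }

int main(void)
{
    int burst[] = { 5, 3, 8 };          /* remaining burst time of 3 processes */
    int quantum = 2;

    for (int pid = 0; pid < 3; pid++)
        enqueue(pid);                   /* all processes start in the ready queue */

    while (count > 0) {
        int pid = dequeue();            /* take the process at the front */
        int run = burst[pid] < quantum ? burst[pid] : quantum;
        burst[pid] -= run;
        printf("P%d runs for %d unit(s)\n", pid, run);
        if (burst[pid] > 0)
            enqueue(pid);               /* not finished: goes to the back (wraps around) */
    }
    return 0;
}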
Question 228
At a particular time of computation the value of a counting semaphore is 7. Then 20 P operations and 15 V operations were completed on this semaphore. The resulting value of the semaphore is :
A
42
B
2
C
7
D
12
Question 228 Explanation: 
Given,
Counting Semaphore value (S) = 7
P operations = 20
V operations = 15
Final Semaphore value (S) = ?

P operation also called as wait operation decrements the value of semaphore variable by 1.
V operation also called as signal operation increments the value of semaphore variable by 1.

Initially, the value of the counting semaphore S = 7
After 20 P operations are performed, S = 7 - 20 = -13
After 15 V operations are performed on S,
the final value of the counting semaphore S = -13 + 15 = 2
Question 229
Increasing the RAM of a computer typically improves performance because:
A
Virtual Memory increases
B
Larger RAMs are faster
C
Fewer page faults occur
D
Fewer segmentation faults occur
Question 229 Explanation: 
  • A page fault occurs when a program attempts to access a block of memory that is not stored in the physical memory, or RAM.
  • The fault notifies the operating system that it must locate the data in virtual memory, then transfer it from the storage device, such as an HDD or SSD, to the system RAM.
  • Increasing the physical RAM on your system generally produces fewer page faults (although designing the application differently can help even more than adding RAM), which is why performance typically improves.
  • Conversely, decreasing the physical RAM on your machine results in more page faults.
Question 230
Consider the following program.
main()
{
fork();
fork();
fork();
}
How many new processes will be created?
A
8
B
6
C
7
D
5
Question 230 Explanation: 
**With "n" fork() statements, 2^n – 1 child processes are created**

Given,
Number of times fork() is invoked, n = 3
Number of child processes = 2^n – 1
= 2^3 – 1
= 8 - 1
= 7
  fork ();    // Line 1
  fork ();   // Line 2
  fork ();   // Line 3
 

         L1              // Now 1 child process is created by line 1
      /      \
    L2        L2        // Now 2 child processes are created by line 2
   /  \     /    \
  L3  L3   L3     L3    // Now 4 child processes are created by line 3

4 + 2 + 1 = 7 child processes are created in total.

Question 231
Which of the following statements is FALSE?
A
The long term scheduler controls the degree of multiprogramming
B
Multiple process of a single program cannot exist
C
Ready queue of the processes resides in main memory
D
A process can have multiple sub processes
Question 231 Explanation: 
Option(A) : The long term scheduler controls the degree of multiprogramming - True
  • A long-term scheduler determines which programs are admitted to the system for processing.
  • It selects processes from the queue and loads them into memory for execution.
  • Process loads into the memory for CPU scheduling.
  • The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor bound.
  • long-term scheduler controls the degree of multiprogramming.
  • If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

Option(B) : Multiple process of a single program cannot exist - False
  • A computer program is a passive collection of instructions, a process is the actual execution of those instructions.
  • Several processes may be associated with the same program,
  • For example, opening up several instances of the same program often means more than one process is being executed.

Option(C) : Ready queue of the processes resides in main memory - True

Option(D) : A process can have multiple sub process -True
Question 232
Dirty bit for a page in a page table
A
Helps avoid unnecessary writes on a paging device
B
Helps maintain LRU information
C
Allows only read on a page
D
None of the above
Question 232 Explanation: 
Dirty bit for a page in a page table Helps avoid unnecessary writes on a paging device
  • A dirty bit or modified bit is a bit that is associated with a block of computer memory and indicates whether or not the corresponding block of memory has been modified.
  • The dirty bit is set when the processor writes to (modifies) this memory. The bit indicates that its associated block of memory has been modified and has not been saved to storage yet.
  • When a block of memory is to be replaced, its corresponding dirty bit is checked to see if the block needs to be written back to secondary memory before being replaced or if it can simply be removed.
  • Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
  • Example : When a page is modified inside the cache and the changes need to be stored back in the main memory, the dirty bit is set to 1 so as to maintain the record of modified pages.
Question 233
Consider three processes(process id 0,1,2 respectively) with compute time bursts 2,4 and 8 time units. All processes arrive at time zero. Consider the Longest remaining time first(LRTF) scheduling algorithm. In LRTF ties are broken by giving priority to the process with the lowest process id. The average turnaround time is:
A
13 units
B
14 units
C
15 units
D
16 units
Question 233 Explanation: 
With LRTF and ties broken in favour of the lowest process id, the average turnaround time is 13 units (option A).
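Scheduling one time unit at a time, always picking the process with the longest remaining burst (lowest id on ties), gives:
  • Per-unit execution order: P2 P2 P2 P2 P1 P2 P1 P2 P0 P1 P2 P0 P1 P2
  • Completion times: P0 at t = 12, P1 at t = 13, P2 at t = 14 (all arrived at t = 0)
  • Average turnaround time = (12 + 13 + 14) / 3 = 39 / 3 = 13 units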
Question 234
Banker’s algorithm is used for:
A
Deadlock avoidance
B
Deadlock recovery
C
Deadlock resolution
D
Deadlock prevention
Question 234 Explanation: 
The Banker algorithm, sometimes referred to as the detection algorithm, is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.
Source : wikipedia.org
Question 235
Consider three processes, all arriving at time zero, with total execution times of 10, 20 and 30 units, respectively. Each process spends the first 20% of its execution time doing I/O, the next 70% doing computation, and the last 10% doing I/O again. The operating system uses a shortest remaining compute time first scheduling algorithm and schedules a new process either when the running process gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can be overlapped as much as possible. For what percentage of time does the CPU remain idle?
A
0%
B
10.6%
C
30.0%
D
89.4%
Question 235 Explanation: 
The CPU remains idle for roughly 10.6% of the time (option B).
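Splitting each process into its I/O-CPU-I/O phases and applying shortest remaining compute time first gives this timeline:
  • P1 (total 10): I/O 2, CPU 7, I/O 1; P2 (total 20): I/O 4, CPU 14, I/O 2; P3 (total 30): I/O 6, CPU 21, I/O 3
  • All three start their first I/O at t = 0 (I/O is overlapped), so the CPU is idle from t = 0 to t = 2
  • CPU: P1 runs 2-9 (P2 at t = 4 and P3 at t = 6 both have longer compute bursts, so no preemption), P2 runs 9-23, P3 runs 23-44
  • Final I/O: P1 9-10, P2 23-25, P3 44-47; everything finishes at t = 47
  • Idle CPU time = 2 (at the start) + 3 (during P3's last I/O) = 5 units, and 5/47 ≈ 10.6%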
Question 236
Consider three CPU-intensive processes which require 10, 20 and 30 time units and arrive at times 0, 2 and 6, respectively. How many context switches are needed if the operating system implements a shortest remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
A
1
B
2
C
3
D
4
Question 236 Explanation: 
Only 2 context switches are needed (option B).
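A quick trace of shortest remaining time first:
  • t = 0: only P1 (10 units) is present, so it starts running
  • t = 2: P2 arrives needing 20 > P1's remaining 8, so no preemption; t = 6: P3 arrives needing 30 > P1's remaining 4, again no preemption
  • t = 10: P1 finishes; P2 (20) is shorter than P3 (30), so P2 runs - context switch 1
  • t = 30: P2 finishes; P3 runs until t = 60 - context switch 2
  • The switches at time zero and at the very end are not counted, so the answer is 2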
Question 237
Which of the following process scheduling algorithm may lead to starvation?
A
FIFO
B
Round Robin
C
Shortest Job Next
D
None of the option
Question 237 Explanation: 
  • First-In-first-Out- No starvation
  • Round Robin- No starvation
  • Shortest job next -It may lead to process starvation for processes which will require a long time to complete if short processes are continually added
Question 238
A scheduling algorithm assigns priority proportional to the waiting time of a process. Every process starts with priority zero(the lowest priority). The scheduler re-evaluates the process priorities every T time units and decides the next process to schedule. Which one of the following is TRUE if the processes have no I/O operations and all arrive at time zero?
A
This algorithm is equivalent to the first come first serve algorithm
B
This algorithm is equivalent to the round-robin algorithm
C
This algorithm is equivalent t the shortest-job-first algorithm
D
This algorithm is equivalent to the shortest-remaining time-first algorithm
Question 238 Explanation: 
"The scheduler re-evaluates the process priorities every T time units and decides the next process to schedule."
- This means a time quantum T must exist; without a time quantum the scheduler cannot re-evaluate the process priorities.

The scheduling algorithm therefore works like round robin with a time quantum equal to T. After a process is scheduled it executes for T time units, its waiting time (and hence its priority) becomes the lowest, and it gets the CPU again only after every other process has also run for T time units.
Question 239
Suppose two jobs, each of which needs 10 minutes of CPU time, start simultaneously. Assume 50% I/O wait time. How long will it take for both to complete, if they run sequentially?
A
10
B
20
C
30
D
40
Question 239 Explanation: 
Given,
There are 2 jobs, each needing 10 minutes of CPU time,
I/O wait time = 50%
"50% I/O wait time" means that half of a job's elapsed time is spent waiting for I/O.
Therefore, if a job's elapsed time is 2x, it spends x doing I/O and x doing computation,
so x = 10 minutes and each job takes 20 minutes of elapsed time when run alone.
Running the two jobs sequentially takes 2 * 20 = 40 minutes.
Note : The given key is option (C).
Question 240
Which of the following scheduling algorithm could result in starvation?
A
Priority
B
Round Robin
C
FCFS
D
None of the above
Question 240 Explanation: 
  • Priority - may lead to starvation: if higher-priority processes keep arriving, a lower-priority process may never get the CPU
  • Round Robin - No starvation
  • First-come-first-served- No starvation
Question 241
Consider a system having 'm' resources of the same type. these resources are shared by 3 processes A,B,C; which have peak time demands of 3,4,6 respectively. The minimum value of 'm' that ensures that deadlock will never occur is
A
11
B
12
C
13
D
14
Question 241 Explanation: 
A requires 3, B requires 4, C requires 6

In worst case, The number of units that each process holds = One less than its maximum demand
  • Process A holds 2 units of resource R
  • Process B holds 3 units of resource R
  • Process C holds 5 units of resource R

Maximum number of units of resource R that ensures deadlock = 2 + 3 + 5 = 10
If we assign 1 more unit of the resource to any one of the processes, that process can reach its peak demand, complete its operations, and then release all the resources it holds.
Minimum number of units of resource R that ensures no deadlock = 10 + 1 = 11

So, 11 or more units will ensure that no deadlock occurs.

Note : They are asking Minimum value so 11 is the correct answer
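The same worst-case argument gives a general rule of thumb for a single resource type (assuming every process needs at least one unit): with peak demands d1, d2, ..., dn, deadlock is impossible whenever
m >= (d1 - 1) + (d2 - 1) + ... + (dn - 1) + 1
Here that is (3 - 1) + (4 - 1) + (6 - 1) + 1 = 11, matching the answer.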
Question 242
Which of the following is added to page table in order to track whether a page of cache has been modified since it was read from the memory?
A
Reference bit
B
Dirty bit
C
Tag Bit
D
Valid Bit
Question 242 Explanation: 
The dirty (modified) bit is the field added to a page table entry to track whether the page has been modified since it was read into memory.
Question 243
The time taken to switch between user and kernel modes of execution be t1 while the time taken to switch between two processes be t2. Which of the following is TRUE?
A
t1>t2
B
t1=t2
C
t1<t2
D
Nothing can be said about the relation between t1 and t2
Question 243 Explanation: 
**Option(C) t1 < t2**
  • The time taken to switch between user and kernel modes is LESS THAN the time taken to switch between two processes.
  • The time taken to switch between 2 processes is much higher than the time taken to switch between kernel and user modes of execution because
  • We need to do a context switch when we switch processes: save the previous process's PCB (Process Control Block) and registers, then load the new process's PCB and registers.
  • Switching between kernel and user mode, by contrast, is a very fast operation (the OS has to change only a single bit at the hardware level).
  • However, a context switch also involves a switch to kernel mode (via a system call), which adds to its cost.
Question 244
A system has 3 processes sharing 4 resources. If each process needs a maximum of 2 units then, deadlock.
A
Can never occur
B
Has to occur
C
May occur
D
None of the options
Question 244 Explanation: 
  • If the system were deadlocked, each process would be holding 1 resource and waiting for one more.
  • Since there are three processes and four resources, at least one process can obtain 2 resources.
  • That process needs no more resources, so it can complete all its operations and then release both of its resources, allowing the others to finish. Hence deadlock can never occur.
Question 245
Which of the following is the first module of a language processing system?
A
Preprocessor
B
Loader
C
Compiler
D
Linker
Question 245 Explanation: 
In a language processing system the source program passes through the preprocessor first, and only then through the compiler, assembler, linker and loader; so the preprocessor is the first module.
Question 246
The total number of page faults for the reference string
1,2,3,4,5,6,7,8,9,10
using FIFO page replacement policy for a process, if 3 frames are allocated to it, is:
A
9
B
10
C
8
D
11
Question 246 Explanation: 
All ten pages in the reference string are distinct, so with 3 frames every reference causes a page fault: 10 page faults.
Question 247
The address sequence generated by tracing a particular program executing in a pure demand paging system with 100 records per page, with 1 free main memory frame is recorded as follows. What is the number of page faults ?
0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0370
A
15,4
B
6,4
C
7,2
D
4,6
Question 247 Explanation: 
Page size = 100 records
Frame size = 100 records
  • 0100−1 page fault       → Records Range [0100−0199 ]
  • 0200−2  page faults  → Range[0200−0299]
  • 0430−3  page faults. → Range[0400−0499]
  • 0499−3  page faults. → Range[ 0400−0499]
  • 0510−4  page faults. → Range[ 0500−0599 ]
  • 0530−4  page faults → Range[ 0500−0599 ]
  • 0560−4 page faults → Range[ 0500−0599 ]
  • 0120−5  page faults → Range[ 0100−0199 ]
  • 0220−6  page faults → Range[ 0200−0299 ]
  • 0240−6  page faults → Range[ 0200−0299 ]
  • 0260−6 page faults → Range[ 0200−0299 ]
  • 0320−7  page faults → Range[ 0300−0399 ]
  • 0370−7  page faults → Range[ 0300−0399 ]
Total number of page fault is 7.
Question 248
State whether TRUE or FALSE
(i) Shortest remaining time (SRT) algorithm is the preemptive version of the shortest job next(SJN) CPU scheduling algorithm
(ii) A “context switch” is the mechanism to store and restore the state or context of a CPU in PCB
A
(i) True, (ii) True
B
(i) False, (ii) False
C
(i) False, (ii) True
D
(i) True, (ii) False
Question 248 Explanation: 
Statement (i) :  True
Shortest remaining time (SRT) algorithm is the preemptive version of the shortest job next(SJN) CPU scheduling algorithm
Statement (ii) : True
A context switch is the mechanism to store and restore the state or context of a CPU in Process Control block so that a process execution can be resumed from the same point at a later time.
Question 249
Operating system maintains the page table for:
A
Each process
B
Each thread
C
Each instruction
D
Each address
Question 249 Explanation: 
The operating system maintains a separate page table for each process; on a context switch the page-table base register is loaded with the new process's page table.
Question 250
Non contiguous memory allocation splits program into blocks of memory called ___ that can be loaded in non adjacent holes in main memory
A
Pages
B
Frames
C
Partition
D
Segments
Question 250 Explanation: 
  • Paging is a technique used for non-contiguous memory allocation with fixed-size partitioning.
  • In paging, the program is divided into fixed-size pages; in segmentation, the program is divided into variable-size sections called segments.
  • Segmentation is another non-contiguous memory allocation scheme like paging, but unlike paging the process is not divided blindly into fixed-size pieces; it uses variable-size partitioning.
  • Segmentation is the programmer's view of memory: instead of dividing a process into equal-size partitions, it is divided according to the program structure into partitions called segments.
  • Non-contiguous memory allocation splits a program into blocks of memory called segments that can be loaded into non-adjacent holes in main memory.
Question 251
A 3.5 inch micro floppy high density disk contains the data__
A
720 MB
B
1.44 MB
C
720 KB
D
1.44 KB
Question 251 Explanation: 
A 3.5 inch high-density micro floppy disk stores 1.44 MB of data.
Question 252
What is compaction?
A
A technique for overcoming internal fragmentation
B
A paging technique
C
A technique for overcoming external fragmentation
D
A technique for overcoming fatal error
Question 252 Explanation: 
Compaction is a technique for overcoming external fragmentation

fragmentation
Question 253
Process is in a ready state____
A
When process is scheduled to run after some execution
B
When process is unable to run until some task has been completed
C
When process is using the CPU
D
None of the above
Question 253 Explanation: 
Process is in a ready state when process is scheduled to run after some execution
process state transition diagram
Question 254
How many times the word "PROCESS" will be printed when executing the following program?
main()
{
  printf("process");
  fflush();
  fork();
  fork();
}

A
8
B
4
C
6
D
7
Question 254 Explanation: 
  • fflush() is meant for output streams only: its purpose is to clear (flush) the output buffer and move the buffered data to the console (in the case of stdout) or to disk (in the case of a file output stream). As written here it is called without an argument, so it does not affect the number of times "process" is printed.
  • The two fork() calls create 3 child processes, and hence "process" will be printed 4 times if the buffered output is not flushed before the forks.
  • In the above code, if we put a '\n' at the end of the printf, or use fflush(stdout), "process" would be printed only 1 time.
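A small C sketch that makes the buffering effect visible (run it with the output redirected to a file or a pipe so that stdout is fully buffered):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("process");        /* no '\n': the text stays in the stdio buffer */
    /* fflush(stdout); */     /* uncomment to flush before forking */
    fork();
    fork();
    return 0;                 /* each of the 4 processes flushes its own copy
                                 of the buffer on exit, printing "process" */
}

With the flush commented out, "process" appears 4 times; with fflush(stdout) before the forks it appears only once.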
Question 255
The open file table has a/an __ associated with each file.
A
File content
B
File permission
C
Open count
D
Close count
Question 255 Explanation: 
The operating system maintains a small table called the open-file table, which contains information about all open files.
  • Open-File table also has an open count associated with each file to indicate how many processes that have the file open.
  • Each close () decreases this open count, and when the open count reaches zero, the file is no longer in use, and the file's entry is removed from the open-file table.
Refer : http://boron.physics.metu.edu.tr/ozdogan/OperatingSystems/week11/node4.html
Question 256
The process of loading the operating system into memory is called:
A
Booting
B
Spooling
C
Thrashing
D
Formatting
Question 256 Explanation: 
  • The operating system is loaded through a bootstrapping process, more succinctly known as booting.
  • A boot loader is a program whose task is to load a bigger program, such as the operating system
  • Booting is the process of starting a computer.
  • Booting can be initiated by hardware such as a button press, or by a software command. After it is switched on, a computer's central processing unit (CPU) has no software in its main memory, so some process must load software into memory before it can be executed.
  • This may be done by hardware or firmware in the CPU, or by a separate processor in the computer system.
Question 257
Starvation can be avoided by which of the following statement:
i. By using shortest job first resource allocation policy
ii. By using first come first serve resources allocation policy
A
I only
B
I and ii only
C
Ii only
D
None of the options
Question 257 Explanation: 
  • Shortest Job First (SJF) algorithm is a scheduling algorithm that offers the minimum average turnaround time.
  • This algorithm can be implemented as either preemptive or non-preemptive.
  • In a preemptive shortest job first algorithm, the process currently running is forced to give up the processor for a new arrival process with a shorter burst time.
  • The preemptive shortest job first algorithm is also known as the shortest remaining time (SRT) algorithm .
  • In a non-preemptive shortest job first algorithm, the scheduler assigns the processor to the shortest process.
  • Even if a shorter process becomes available, the process currently running will continue to execute until it is done.
  • The main problem with the shortest job first algorithm is starvation. If there is a steady supply of short process, the long process may never get the chance to be executed by the processor.
  • First-come, first-served (FCFS) scheduling is the simplest scheduling algorithm. It can make short processes wait behind very long ones, but starvation is not possible because every process eventually gets its turn under FCFS.
Question 258
The difference between a named pipe and a regular file in Unix is that
A
Unlike a regular file, named pipe is a special file
B
The data in a pipe is transient, unlike the content of a regular file
C
Pipes forbid random accessing, while regular files do allow this.
D
All of the above
Question 258 Explanation: 
  • Almost everything in Linux can be considered a file, but the main difference between a regular file and a named pipe is that a named pipe is a special instance of a file that has no contents on the filesystem.
  • A FIFO special file (a named pipe) is similar to a pipe, except that it is accessed as part of the filesystem. It can be opened by multiple processes for reading or writing. When processes are exchanging data via the FIFO, the kernel passes all data internally without writing it to the filesystem.
  • Thus, the FIFO special file has no contents on the filesystem; the filesystem entry merely serves as a reference point so that processes can access the pipe using a name in the filesystem.
  • The kernel maintains exactly one pipe object for each FIFO special file that is opened by at least one process.
  • The FIFO must be opened on both ends (reading and writing) before data can be passed. Normally, opening the FIFO blocks until the other end is opened also.
  • So actually a named pipe does nothing until some process reads and writes to it. It does not take any space on the hard disk (except a little bit of meta information), it does not use the CPU.
Question 259
The term ‘aging’ refers to
A
Booting up the priority of the process in multi-level of queue without feedback
B
Keeping track of how long a page has been in memory for the purpose of LRU replacement
C
Letting job reside in memory for a certain amount of time so that the number of pages required can be estimated accurately
D
Gradually increasing the priority of jobs that wait in the system for a long time to remedy infinite blocking
Question 259 Explanation: 
  • Aging’ refers to gradually increasing the priority of jobs that wait in the system for a long time to remedy infinite blocking.
  • A major problem with priority scheduling is indefinite blocking or starvation
  • A solution to the problem of indefinite blockage of the low-priority process is aging.
  • Aging is a scheduling technique used to avoid starvation. In this technique, the priority of the jobs that have a longer waiting time is increased as compared to the newer processes, to avoid the starvation of older processes.
Question 260
Consider a set of n tasks with known runtimes r1, r2….rn to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?
A
Round Robin
B
Shortest job first
C
Highest response ratio next
D
First come first served
Question 260 Explanation: 
- Throughput means the total number of tasks executed per unit time.
- The more tasks that complete in a given interval, the higher the throughput.
- The Shortest Job First scheduling algorithm gives the maximum throughput, because the processor always selects the process with the smallest execution time to execute next, so the largest number of tasks get completed per unit time.

Note :
  • Highest Response Ratio Next policy favors shorter jobs but it also limits the waiting time of longer jobs.
  • Priority scheduling can be implemented as either preemptive or non-preemptive.
  • Round-robin scheduling is a preemptive scheduling algorithm.
  • Shortest job first scheduling results in maximum throughput.
  • First come first serve scheduling algorithm is the simplest scheduling algorithm.
Question 261
Consider a job scheduling problem with 4 jobs J1, J2, J3, J4 and with corresponding deadlines: ( d1, d2, d3, d4) = (4, 2, 4, 2).
Which of the following is not a feasible schedule without violating any job schedule?
A
J2, J4, J1, J3
B
J4, J1, J2, J3.
C
J4, J2, J1, J3.
D
J4, J2, J3, J1
Question 261 Explanation: 
***Option(B) is correct answer***
  • To get a feasible schedule we have to complete every job within its deadline.
  • From the deadlines, jobs J2 and J4 (deadline 2) must occupy the first two slots, whereas J1 and J3 (deadline 4) can take the last two slots in either order.
  • Options A, C and D schedule J2 and J4 first and only then J1 and J3, so they are feasible.
  • In option (B) the order of completing jobs is J4, J1, J2, J3, which is not a feasible schedule,
    because
  • J4 is scheduled first and must be completed by time 2 - no problem
  • J1 is scheduled second and must be completed by time 4 - no problem
  • But J2 must be completed by time 2, and since J1 occupied the second slot, J2 cannot meet its deadline, making the schedule infeasible.
Question 262
Round Robin schedule is essentially the preemptive version of
A
FIFO
B
Shortest job first
C
Shortest remaining time
D
Longest remaining time
Question 262 Explanation: 
In round-robin scheduling each process is assigned a fixed time slot (quantum) in a cyclic way. Therefore, the round-robin scheduling algorithm is the preemptive version of FIFO scheduling.
Question 263
What is the name of the technique in which the operating system of a computer executes several programs concurrently by switching back and forth between them?
A
Partitioning
B
Multi-tasking
C
Windowing
D
Paging
Question 263 Explanation: 
In multitasking, the operating system of a computer executes several programs concurrently by switching back and forth between them.
Question 264
Virtual memory is
A
Part of Main Memory only used for swapping
B
A technique to allow a program, of size more than the size of main memory, to run
C
Part of secondary storage used in program execution
D
None of these
Question 264 Explanation: 
Virtual memory is a technique that allows a program larger than the main memory to run.
Question 265
Disk requests are received by a disk drive for cylinder 5, 25, 18, 3, 39, 8 and 35 in that order. A seek takes 5 msec per cylinder moved. How much seek time is needed to serve these requests for a Shortest Seek First (SSF) algorithm ? Assume that the arm is at cylinder 20 when the last of these requests is made with none of the requests yet served
A
125 msec
B
295 msec
C
575 msec
D
750 msec
Question 265 Explanation: 
Starting at cylinder 20, Shortest Seek First always serves the pending request closest to the current head position:
20 → 18 (2) → 25 (7) → 35 (10) → 39 (4) → 8 (31) → 5 (3) → 3 (2)
Total head movement = 2 + 7 + 10 + 4 + 31 + 3 + 2 = 59 cylinders.
Seek time = 59 × 5 msec = 295 msec, so option (B) is correct. A short simulation is sketched below.
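A minimal sketch of the SSF computation above:

# Shortest Seek First: repeatedly move the head to the closest pending cylinder
# and add up the distance moved, then convert cylinders to seek time.

def ssf_seek_time(start, requests, msec_per_cylinder):
    pending = list(requests)
    head, moved = start, 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))  # closest pending cylinder
        moved += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return moved * msec_per_cylinder

print(ssf_seek_time(20, [5, 25, 18, 3, 39, 8, 35], 5))   # 295 msec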
Question 266
Consider a system having ‘m’ resources of the same type. The resources are shared by 3 processes A, B, C, which have peak time demands of 3, 4, 6 respectively. The minimum value of ‘m’ that ensures that deadlock will never occur is
A
11
B
12
C
13
D
14
Question 266 Explanation: 
A requires 3, B requires 4, C requires 6

In worst case, The number of units that each process holds = One less than its maximum demand
  • Process A holds 2 units of resource R
  • Process B holds 3 units of resource R
  • Process C holds 5 units of resource R

Maximum number of units of resource R that ensures deadlock = 2 + 3 + 5 = 10
If we assign one more unit of the resource, some process can reach its peak demand, run to completion, and then release all the units it holds, so the others can proceed.
Minimum number of units of resource R that ensures no deadlock = 10 + 1 = 11

So, any number of units greater than or equal to 11 will ensure no deadlock (a quick check is sketched below).

Note : The question asks for the minimum value, so 11 is the correct answer
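A minimal sketch of the rule used above (sum of each process's peak demand minus one, plus one):

# Deadlock is impossible once the total number of units exceeds the sum of
# (peak demand - 1) over all processes: at least one process can then always
# reach its peak, finish, and release its units.

def min_units_for_no_deadlock(peak_demands):
    return sum(d - 1 for d in peak_demands) + 1

print(min_units_for_no_deadlock([3, 4, 6]))   # 11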
Question 267
A task in a blocked state
A
Is executable
B
Is running
C
Must still be placed in the run queues
D
Is waiting for some temporarily unavailable resources
Question 267 Explanation: 
A task in a blocked state is waiting for some temporarily unavailable resource.

[Figure: process state transition diagram]
Question 268
Semaphores
A
Synchronize critical resources to prevent deadlock
B
Synchronize critical resources to prevent contention
C
Are used to do I/O
D
Are used for memory management
Question 268 Explanation: 
Semaphores are used to synchronize access to critical resources in order to prevent contention (see the sketch below).
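A minimal Python sketch of a binary semaphore guarding a critical resource so that concurrent threads do not contend for it at the same time:

import threading

resource_sem = threading.Semaphore(1)   # one permit -> only one thread in the critical section
shared_counter = 0

def use_critical_resource():
    global shared_counter
    resource_sem.acquire()              # wait (P): block if the resource is taken
    try:
        shared_counter += 1             # critical section on the shared resource
    finally:
        resource_sem.release()          # signal (V): let the next thread in

threads = [threading.Thread(target=use_critical_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)                   # always 10: no lost updates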
Question 269
On a system using non-preemptive scheduling, processes with expected run times of 5, 18, 9 and 12 are in the ready queue. In what order should they be run to minimize wait time?
A
5, 12, 9, 18
B
5, 9, 12, 18
C
12, 18, 9, 5
D
9, 12, 18, 5
Question 269 Explanation: 
To minimize waiting time the processes should be executed in Shortest Job First order, i.e. 5, 9, 12, 18. The individual waiting times are then 0, 5, 14 and 26, giving an average wait of 11.25, which no other order can beat.
Question 270
The number of page frames that must be allocated to a running process in a virtual memory environment is determined by
A
The instruction set architecture
B
Page size
C
Number of processes in memory
D
Physical memory size
Question 270 Explanation: 
  • The number of page frames that must be allocated to a running process in a virtual memory environment is determined by the instruction set architecture.
  • With no indirect addressing, at least two frames are needed: one for the instruction (code) and one in case the instruction references data in memory.
  • With one level of indirection, at least three frames are needed: one for the instruction and two more to follow the indirect reference to the data. Each further level of indirection requires one more frame (for example, a minimum of four frames at the next level).
Refer : http://stackoverflow.com/questions/11213013/minimum-page-frames
Question 271
Consider a small 2-way set-associative cache memory, consisting of four blocks. For choosing the block to be replaced, use the least recently used (LRU) scheme. The number of cache misses for the following sequence of block addresses is 8, 12, 0, 12, 8
A
2
B
3
C
4
D
5
Question 271 Explanation: 
With four blocks organised as a 2-way set-associative cache there are 2 sets of 2 blocks each, and a block address maps to set (address mod 2). The addresses 8, 12 and 0 are all even, so every reference goes to set 0, which can hold only two blocks:
8 → miss {8}; 12 → miss {8, 12}; 0 → miss, evict LRU 8 {12, 0}; 12 → hit; 8 → miss, evict LRU 0 {12, 8}.
Total misses = 4, so option (C) is correct. A short simulation is sketched below.
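A minimal sketch of the LRU bookkeeping for this 2-way set-associative cache:

# 2-way set-associative cache with LRU replacement: 4 blocks = 2 sets x 2 ways,
# set index = block address mod number of sets. Each set is kept ordered
# from least recently used (front) to most recently used (back).

def count_misses(addresses, num_sets=2, ways=2):
    sets = [[] for _ in range(num_sets)]
    misses = 0
    for addr in addresses:
        s = sets[addr % num_sets]
        if addr in s:
            s.remove(addr)             # hit: refresh to most recently used
        else:
            misses += 1
            if len(s) == ways:
                s.pop(0)               # evict the least recently used block
        s.append(addr)
    return misses

print(count_misses([8, 12, 0, 12, 8]))  # 4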
Question 272
Which of the following RAID levels provides the highest data transfer rate (read/write)?
A
RAID 1
B
RAID 3
C
RAID 4
D
RAID 5
Question 272 Explanation: 
[Figure: RAID levels comparison chart]

Refer : htt