# Algorithms | Subject Wise Questions

 Question 1
The following paradigm can be used to find the solution of the problem in minimum time: Given a set of non-negative integers and a value K, determine whether there is a subset of the given set with a sum equal to K:
 A Divide and Conquer B Dynamic Programming C Greedy Algorithm D Branch and Bound
Question 1 Explanation:
The given problem is the subset-sum problem, which requires dynamic programming to solve in minimum time. So option B is correct.

Subset Sum Problem - Dynamic Programming
Given a set of positive integers and a value sum, determine whether there is a subset of the given set with a sum equal to the given sum.

For Example:
Input: set[] = {3, 34, 4, 12, 5, 2}, sum = 9
Output: True
There is a subset (4, 5) with sum 9.

Input: set[] = {3, 34, 4, 12, 5, 2}, sum = 30
Output: False
There is no subset that adds up to 30.
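The dynamic-programming idea can be sketched as a short function (a minimal sketch; the function name is ours, not from any particular library):

```python
def subset_sum(nums, target):
    """Return True if some subset of nums sums to target (classic DP)."""
    # reachable[s] is True when sum s can be formed from the elements seen so far
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in nums:
        # iterate downwards so each element is used at most once
        for s in range(target, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]
```

Running it on the two examples above reproduces the stated outputs.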
 Question 2
Given two sorted lists of sizes m and n respectively, the number of comparisons needed in the worst case by the merge algorithm will be
 A m*n B Maximum of m and n C Minimum of m and n D m+n−1
Question 2 Explanation:
The number of comparisons needed in the worst case by the merge algorithm is m+n−1.
In the best case it is min(m, n), when all elements of the shorter list are exhausted first.
The worst case of m+n−1 comparisons occurs when elements from the two lists go alternately into the third (merged and sorted) array.
The number of element moves is the same in all cases: m+n.
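A merge routine that counts comparisons makes these bounds concrete (a sketch; the helper name is ours):

```python
def merge_count(a, b):
    """Merge two sorted lists; return (merged_list, number_of_comparisons)."""
    merged, comparisons = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        comparisons += 1          # exactly one comparison per loop iteration
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged.extend(a[i:])          # leftover elements need no comparisons
    merged.extend(b[j:])
    return merged, comparisons
```

With alternating inputs like [1, 3, 5] and [2, 4, 6] the count is 3+3−1 = 5 (worst case); with [1, 2] and [10, 20, 30] it is min(2, 3) = 2 (best case).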
 Question 3
Of the following sorting algorithms, which has a running time that is least dependent on the initial ordering of the input?
 A Merge Sort B Insertion Sort C Selection Sort D Quick Sort
Question 3 Explanation:
Quick sort - If the array is already sorted or reverse sorted, quick sort takes O(n²); on typical unsorted input it takes O(n log n). So its running time depends on the ordering of the input.

Insertion sort - If the array is already sorted, insertion sort takes O(n); if it is reverse sorted, it takes O(n²). So it also depends on the ordering of the input.

Selection sort - The best- and worst-case performance of selection sort are both O(n²). It does not depend on the ordering of the input.

Merge sort - The best- and worst-case time complexity of merge sort are both O(n log n). It does not depend on the ordering of the input, i.e. whether the array is sorted or not.

Best[merge] = Worst[merge] and Best[selection] = Worst[selection]. Why aren't they equally good answers?
The best- and worst-case comparison counts of selection sort are both O(n²), but if the array is already sorted, fewer swaps take place. In merge sort, the time complexity is O(n log n) in all cases, so its performance is affected least by the order of the input sequence.
 Question 4
The running time of an algorithm is given by
T(n)=T(n−1)+T(n−2)−T(n−3), if n>3
= n, otherwise
Then what should be the relation between T(1), T(2) and T(3) , so that the order of the algorithm is constant?
 A T(1) = T(2) = T(3) B T(1) + T(3) = 2 * T(2) C T(1) – T(3) = T(2) D T(1) + T(2) = T(3)
Question 4 Explanation:
For the order of the algorithm to be constant, T(n) must settle to a fixed value, i.e. T(n) = T(n−1) for all sufficiently large n.
Substituting T(n) = T(n−1) into the recurrence
T(n) = T(n−1) + T(n−2) − T(n−3)
gives
T(n−2) = T(n−3)
Repeating the same argument down to the base cases yields
T(1) = T(2) = T(3)
 Question 5
The time complexity of computing the transitive closure of a binary relation on a set of n elements is known to be
 A O(n) B O(n log n) C O(n^(3/2)) D O(n³)
Question 5 Explanation:
Transitive closure can be found by Floyd Warshall's algorithm in O(n3)

Warshall's Algorithm
Warshall's algorithm uses the adjacency matrix to find the transitive closure of a directed graph.

Transitive closure
The transitive closure of a directed graph with n vertices can be defined as the n-by-n boolean matrix T, in which the element in the ith row and jth column is 1 if there exists a directed path from the ith vertex to the jth vertex, and 0 otherwise.
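Warshall's triple loop can be sketched directly from this definition (a minimal sketch; the function name is ours):

```python
def transitive_closure(adj):
    """Warshall's algorithm: adj is an n-by-n 0/1 adjacency matrix.
    Returns the reachability (transitive closure) matrix in O(n^3) time."""
    n = len(adj)
    t = [row[:] for row in adj]  # copy so the input is not modified
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # i reaches j either directly or via intermediate vertex k
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return [[1 if x else 0 for x in row] for row in t]
```

For the path 0 → 1 → 2, the closure also records the indirect edge 0 → 2.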
 Question 6
The level of aggregation of information required for operational control is
 A Detailed B Aggregate C Qualitative D None of the above
Question 6 Explanation:
Operational control requires detailed information.
Strategic planning requires aggregate information.
 Question 7
Selection sort algorithm design technique is an example of
 A Greedy method B Divide-and-conquer C Dynamic Programming D Backtracking
Question 7 Explanation:
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) from the unsorted part and putting it at the beginning.

The algorithm maintains two subarrays in a given array.
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.

In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray. Clearly, it is a greedy approach to sort the array.

Example
arr[] = 64 25 12 22 11
# Placing the minimum element in arr[0...4] in the beginning
11 25 12 22 64

# Placing the minimum element in arr[1...4] in the beginning
11 12 25 22 64

# Placing the minimum element in arr[2...4] in the beginning
11 12 22 25 64

# Placing the minimum element in arr[3...4] in the beginning
11 12 22 25 64
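The iterations above can be sketched as a small function (name ours); at each step it greedily picks the minimum of the unsorted part:

```python
def selection_sort(arr):
    """Return a sorted copy of arr using selection sort."""
    a = list(arr)
    for i in range(len(a) - 1):
        min_idx = i
        # find the minimum element of the unsorted suffix a[i..]
        for j in range(i + 1, len(a)):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # move it to the sorted prefix
    return a
```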
 Question 8
The average case and worst case complexities for Merge sort algorithm are
 A O(n²), O(n²) B O(n²), O(n log₂n) C O(n log₂n), O(n²) D O(n log₂n), O(n log₂n)
Question 8 Explanation:
Merge sort always splits the array in half and merges in linear time, so its recurrence is T(n) = 2T(n/2) + O(n), which gives O(n log₂n) in both the average and the worst case.
 Question 9
The time taken by binary search algorithm to search a key in a sorted array of n elements is
 A O(log₂n) B O(n²) C O(n) D O(n log₂n)
Question 9 Explanation:
Binary search is applied to a sorted array or list. Its time complexity of O(log n) makes it very fast compared to other searching algorithms. The only limitation is that the array or list of elements must be sorted for the binary search algorithm to work on it.

Worst-case performance = O(log n)
Best-case performance = O(1)
Average performance = O(log n)
Worst-case space complexity = O(1)
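The iterative form matches these bounds: each probe halves the interval, and only two index variables are kept, so the extra space is O(1) (a sketch; the function name is ours):

```python
def binary_search(a, key):
    """Return the index of key in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            lo = mid + 1   # key can only be in the upper half
        else:
            hi = mid - 1   # key can only be in the lower half
    return -1
```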
 Question 10
The Fibonacci sequence is the sequence of integers
 A 1, 3, 5, 7, 9, 11, 13 B 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 C 0, 1, 3, 4, 7, 11, 18, 29, 47 D 0, 1, 3, 7, 15
Question 10 Explanation:
The Fibonacci Sequence is the series of numbers :

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

The next number is found by adding up the two numbers before it:
• the 2 is found by adding the two numbers before it (1+1),
• the 3 is found by adding the two numbers before it (1+2),
• the 5 is (2+3),
• and so on!

Example: the next number in the sequence above is 21 + 34 = 55.

 Question 11
The problems 3-SAT and 2-SAT are
 A Both NP-complete B Both in P C NP-complete and in P, respectively D Undecidable and NP-complete, respectively
Question 11 Explanation:
• 3-SAT is NP Complete Problem
• 2-SAT is P Class Problem

 Question 12
The recurrence relation that arises in relation with the complexity of binary search is:
 A T(n)=2T(n/2)+k , where k is constant B T(n)=T(n/2) +k, where k is constant C T(n)=T(n/2)+logn D T(n)=T(n/2)+n
Question 12 Explanation:
Binary Search: Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or the interval is empty.

Binary search on a sorted array takes O(log n) time.
T(n) = T(n/2) + k, where k is constant, produces a complexity of O(log n).

Derivation of the recurrence relation T(n) = T(n/2) + 1:

T(n) = T(n/2) + 1
T(n/2) = T(n/4) + 1    [T(n/4) = T(n/2²)]
T(n/4) = T(n/8) + 1    [T(n/8) = T(n/2³)]
.
.
kth step: T(n/2^(k−1)) = T(n/2^k) + 1
Adding all the equations: T(n) = T(n/2^k) + k _____ eq(final)
Set n/2^k = 1 [how many times must we halve n until only one element is left?]
=> n = 2^k
=> k = log₂n [taking log base 2 on both sides]
Put k = log n in eq(final):
T(n) = T(1) + log n
T(n) = 1 + log n [T(1) = 1: with only one element left in the array, a single comparison decides the search]
T(n) = O(log n)
 Question 13
Which one of the following in-place sorting algorithms needs the minimum number of swaps?
 A Insertion Sort B Quick Sort C Heap Sort D Selection Sort
Question 13 Explanation:
In-place means that the algorithm does not use extra space proportional to the input size for manipulating it, though it may require a small amount of additional working space. Usually this space is O(log n), though sometimes anything in o(n) (smaller than linear) is allowed.

Selection sort is an in-place algorithm with the minimum number of swaps. It works greedily and performs O(n) swaps to sort an array of n elements.

Number of swaps in the worst case:
• Quick sort = O(n²)
• Insertion sort = O(n²)
• Selection sort = O(n)
• Heap sort = O(n log n)
Selection sort always performs O(n) swaps, while insertion sort performs O(n²) swaps in the average and worst case.
Selection sort is preferable if writing to memory is significantly more expensive than reading.

Bubble sort is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. Because it only uses comparisons to operate on elements, it is a comparison sort.
 Question 14
The number of swappings needed to sort the numbers 8, 22, 7, 9, 31, 5, 13 in ascending order, using bubble sort is
 A 11 B 12 C 13 D 10
Question 14 Explanation:
Tracing bubble sort, showing the array after each swap:

Pass 1 (4 swaps):
8, 7, 22, 9, 31, 5, 13
8, 7, 9, 22, 31, 5, 13
8, 7, 9, 22, 5, 31, 13
8, 7, 9, 22, 5, 13, 31

Pass 2 (3 swaps):
7, 8, 9, 22, 5, 13, 31
7, 8, 9, 5, 22, 13, 31
7, 8, 9, 5, 13, 22, 31

Pass 3 (1 swap): 7, 8, 5, 9, 13, 22, 31
Pass 4 (1 swap): 7, 5, 8, 9, 13, 22, 31
Pass 5 (1 swap): 5, 7, 8, 9, 13, 22, 31

Total: 4 + 3 + 1 + 1 + 1 = 10 swaps are required to sort the array.
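A bubble sort with a swap counter verifies the total (a sketch; the function name is ours). The count equals the number of inversions in the input:

```python
def bubble_sort_swaps(arr):
    """Bubble sort a copy of arr; return (sorted_list, number_of_swaps)."""
    a = list(arr)
    swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:           # adjacent pair out of order
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, swaps
```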
 Question 15
Which of the following algorithm solves the all-pairs shortest path problem?
 A Prim’s algorithm B Dijkstra's algorithm C Bellman-Ford’s algorithm D Floyd-Warshall’s algorithm
Question 15 Explanation:
Prim’s Algorithm: used to find the Minimum Spanning Tree (MST) of a given graph. To apply Prim’s algorithm, the graph must be weighted, connected and undirected.
Dijkstra’s Algorithm: used to find the shortest path from a source to all the other nodes in a weighted graph.
Bellman-Ford’s Algorithm: used to find the shortest distance from a source to all other nodes in a graph that can contain negative-weight edges.
Floyd-Warshall’s Algorithm: used for solving the all-pairs shortest path problem, i.e. finding the shortest paths between all pairs of vertices in a graph.
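The Floyd-Warshall relaxation can be sketched in a few lines (a minimal sketch; function name and test graph are ours, and no negative cycles are assumed):

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n-by-n matrix, INF where no edge.
    Returns a new matrix of shortest distances."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # relax the path i -> k -> j
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```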
 Question 16
The time complexity of computing the transitive closure of a binary relation on a set of n elements is known to be
 A O(n log n) B O(n^(3/2)) C O(n³) D O(n)
Question 16 Explanation:
Transitive closure can be found by Floyd Warshall's algorithm in O(n3)

Warshall's Algorithm
Warshall's algorithm uses the adjacency matrix to find the transitive closure of a directed graph.

Transitive closure
The transitive closure of a directed graph with n vertices can be defined as the n-by-n boolean matrix T, in which the element in the ith row and jth column is 1 if there exists a directed path from the ith vertex to the jth vertex, and 0 otherwise.
 Question 17
Which one of the following algorithm design techniques is used in finding all pairs of shortest distances in a graph?
 A Dynamic programming B Backtracking C Greedy D Divide and Conquer
Question 17 Explanation:

The Floyd-Warshall algorithm solves the all-pairs shortest path problem, using dynamic programming to find the shortest path between every pair of vertices in a weighted graph. The algorithm works for both directed and undirected weighted graphs, but it does not work for graphs with negative cycles (where the sum of the edge weights in a cycle is negative).

A weighted graph is a graph in which each edge has a numerical value associated with it.

The Floyd-Warshall algorithm is also called Floyd's algorithm, the Roy-Floyd algorithm, the Roy-Warshall algorithm, or the WFI algorithm.
-This algorithm follows the dynamic programming approach to find the shortest paths.
 Question 18
How many comparisons are needed to sort an array of length 5 if a straight selection sort is used and array is already in the opposite order?
 A 1 B 5 C 10 D 20
Question 18 Explanation:
Selection sort performs the same number of comparisons no matter what the initial order is: it ALWAYS uses n(n−1)/2 comparisons.
So, for length 5,

5*4/2 = 20/2 = 10 Comparisons

Example: 5 4 3 2 1
1st pass: compare 4 elements with the current minimum candidate
2nd pass: compare 3 elements
3rd pass: compare 2 elements
4th pass: compare 1 element

So the total number of comparisons is 4+3+2+1 = 10,
which can be viewed as the sum of the first (n−1) positive integers:
S = ((1 + (n−1)) * (n−1)) / 2
S = n(n−1)/2 comparisons.
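A comparison counter confirms that the total is the same for any input order (a sketch; the function name is ours):

```python
def selection_sort_comparisons(arr):
    """Count key comparisons made by straight selection sort: always n(n-1)/2."""
    a = list(arr)
    comparisons = 0
    for i in range(len(a) - 1):
        min_idx = i
        for j in range(i + 1, len(a)):
            comparisons += 1           # one comparison per inner-loop step
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return comparisons
```

Both the reverse-sorted and the already-sorted 5-element arrays need exactly 10 comparisons.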
 Question 19
Consider the recurrence equation
T(n) = 2T(n-1), if n>0
= 1, otherwise
Then T(n) is (in big O order)
 A O(n) B O(2ⁿ) C O(1) D O(log n)
Question 19 Explanation:
T(n) = 2T(n−1)
     = 2[2T(n−2)] = 2²T(n−2)
     = 2²[2T(n−3)] = 2³T(n−3)
     = ... = 2^k T(n−k)
With n−k = 0, i.e. k = n, and T(0) = 1:
T(n) = 2ⁿ
 Question 20

Match the following and choose the correct Solution for the order A, B, C, D

A. p. Strassen matrix multiplication Decrease and Conquer Insertion sort Dynamic Programming Guassian Elimination Divide and Conquer Floyd shortest path algorithm Transform and Conquer

Code:
 A r, s, p, q B r, p, s, q C q, s, p, r D s, p, q, r
Question 20 Explanation:
Strassen matrix multiplication uses Divide and Conquer technique to reduce the complexity of matrix multiplication.

Insertion sort uses decrease and conquer approach as its loop invariant condition is at each step, A[1..j-1] contains the first j-1 elements in sorted order.

Gaussian elimination uses transform and conquer approach to solve set of equations.

Foyd Warshall algorithm is used to find All Pairs Shortest Path problem which uses Dynamic Programming to find the shortest path between all the pairs of vertices in a weighted graph.
 Question 21
The average number of key comparisons required for a successful search using sequential search on n items is
 A N/2 B (n-1)/2 C (n+1)/2 D None of these
Question 21 Explanation:
Sequential search: Searching an element in an array, the search starts from the first element till the last element.

The average number of comparisons in a sequential search is (N+1)/2 where N is the size of the array.

-If element is at 1 position then it requires 1 comparison.
-If element is at 2 position then it requires 2 comparison.
-If element is at 3 position then it requires 3 comparison.
-Similarly , If element is at n position then it requires n comparison.

Total comparison
= n(n+1)/2

For average comparison
= (n(n+1)/2) / n
= (n+1)/2
 Question 22
Which algorithm design technique is used in the quicksort algorithm?
 A Dynamic programming B Backtracking C Divide and conquer D Greedy method
Question 22 Explanation:
The quicksort algorithm is based on the divide and conquer approach: the array is split into subarrays around a pivot, and these subarrays are recursively sorted.
 Question 23
Consider the following recurrence:
T(n) = 2T(√n) + 1
T(1) = 1
Which of the following is true?
 A T(n)= O(log log n) B T(n) = O(log n) C T(n) = O(√n) D T(n)= O(n)
Question 23 Explanation:
Let n = 2^m.
T(n) = 2T(√n) + 1
T(2^m) = 2T(2^(m/2)) + 1
Let S(m) = T(2^m):
S(m) = 2S(m/2) + 1

By case 1 of the master method,
S(m) = Θ(m)
S(m) = Θ(log n)    [since n = 2^m, m = log₂n]

∴ T(n) = T(2^m) = S(m) = Θ(log n)
 Question 24
Number of comparisons required for an unsuccessful search of an element in a sequential search, organized, fixed length, symbol table of length L is
 A L B L/2 C (L+1)/2 D 2L
Question 24 Explanation:
The symbol table is implemented as a sequential search table, so a lookup may have to scan the entire table.

In an unsuccessful search, the element is compared against every entry up to the last one, so the number of comparisons equals the size of the table. Hence the answer is L.

- In sequential search, a successful search needs (L+1)/2 comparisons on average.
- In sequential search, an unsuccessful search needs L comparisons.
 Question 25
Let T(n) be defined by T(1) = 10 and T(n + 1) = 2n + T(n) for all integers n ≥ 1. Which of the following represents the order of growth of T(n) as a function of n?
 A O(n) B O(n log n) C O(n²) D O(n³)
Question 25 Explanation:
T(n + 1) = 2n + T(n) for all integers n ≥ 1
T(n + 1) = 2n + 2(n−1) + T(n−1)    [by repeated substitution]
T(n + 1) = 2n + 2(n−1) + 2(n−2) + T(n−2)
T(n + 1) = 2n + 2(n−1) + 2(n−2) + ... + 2·1 + T(1)
T(n + 1) = 2[n + (n−1) + ... + 1] + 10
T(n + 1) = 2 · n(n+1)/2 + 10
T(n + 1) = n² + n + 10
T(n) = O(n²)
 Question 26
Which of the following algorithm design technique is used in merge sort?
 A Greedy method B Backtracking C Dynamic programming D Divide and Conquer
Question 26 Explanation:
Merge sort is a sorting technique based on the divide and conquer approach, with worst-case time complexity O(n log n).

Merge sort first divides the array into equal halves and then combines them in a sorted manner.
 Question 27
Which of the following sorting algorithms has the minimum running time complexity in the best and average case?
 A Insertion sort, Quick sort B Quick sort, Quick sort C Quick sort, Insertion sort D Insertion sort, Insertion sort
Question 27 Explanation: Checking each option's best and average case:
1. Insertion sort: O(n), Quick sort: O(n log n)
2. Quick sort: O(n log n), Quick sort: O(n log n)
3. Quick sort: O(n log n), Insertion sort: O(n²)
4. Insertion sort: O(n), Insertion sort: O(n²)
Insertion sort has the best best case, O(n), and quick sort the best average case, O(n log n). So option (A) is correct.
 Question 28
Suppose there are 11 items in sorted order in an array. How many searches are required on the average, if binary search is employed and all searches are successful in finding the item?
 A 3 B 3.46 C 2.81 D 3.33
Question 28 Explanation:
Consider a sorted array of 11 elements:
1 2 3 4 5 6 7 8 9 10 11
For convenience, view binary search as a binary search tree of probe positions:
(1 2 3 4 5) 6 (7 8 9 10 11), after which the search continues in either the lower or the upper half.
Lower half: (1 2) 3 (4 5)
Upper half: (7 8) 9 (10 11), and so on.
So the middle element 6 takes 1 comparison; 3 and 9 take 2 comparisons each; the four elements at the next level take 3 comparisons each; and the remaining four elements take 4 comparisons each.

Average number of comparisons = (1 + (2+2) + (3+3+3+3) + (4+4+4+4)) / 11 = 33/11 = 3
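The average can also be checked by instrumenting binary search and searching for every element (a sketch; the function name is ours):

```python
def comparisons_to_find(a, key):
    """Number of three-way probes binary search makes to find key in sorted a."""
    lo, hi, count = 0, len(a) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1                 # one probe of the middle element
        if a[mid] == key:
            return count
        elif a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return count

arr = list(range(1, 12))           # the 11 sorted items
total = sum(comparisons_to_find(arr, k) for k in arr)
average = total / 11               # (1 + 2*2 + 4*3 + 4*4) / 11
```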
 Question 29

The solution of the recurrence relation T(m) = T(3m/4) + 1 is :

 A Θ(lg m) B Θ(m) C Θ(mlg m) D Θ(lglg m)
Question 29 Explanation:
Master's theorem (extended form):
T(n) = aT(n/b) + Θ(n^k log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.

Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))

Case 2: if a = b^k,
a) if p > −1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = −1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < −1, then T(n) = Θ(n^(log_b a))

Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k log^p n)
b) if p < 0, then T(n) = Θ(n^k)

Given
T(m) = T(3m/4) + 1
T(m) = T(m/(4/3)) + 1

This matches the master theorem form
T(m) = aT(m/b) + m^k log^p m
as
T(m) = 1 · T(m/(4/3)) + 1 · m^0 log^0 m
where a = 1, b = 4/3, k = 0 and p = 0.

Since a = 1 = b^0 = b^k, case 2 applies, and p = 0 > −1 selects case 2(a):

T(m) = Θ(m^(log_b a) · log^(p+1) m)
T(m) = Θ(m^(log_{4/3} 1) · log^(0+1) m)    (log_{4/3} 1 = 0)
T(m) = Θ(m^0 · log m)
T(m) = Θ(log m)
 Question 30
A text is made up of the characters A, B, C, D, E each occurring with the probability
0.08, 0.40, 0.25, 0.15 and 0.12 respectively.
The optimal coding technique will have the average length of :
 A 2.4 B 1.87 C 3 D 2.15
Question 30 Explanation:
Given Probability  :
A → 0.08
B → 0.40
C → 0.25
D → 0.15
E → 0.12

Now draw a Huffman tree. The length contribution of each character = number of bits × probability of occurrence:
A = 1110 (4 bits): 4 × 0.08 = 0.32
B = 0 (1 bit): 1 × 0.40 = 0.40
C = 10 (2 bits): 2 × 0.25 = 0.50
D = 110 (3 bits): 3 × 0.15 = 0.45
E = 1111 (4 bits): 4 × 0.12 = 0.48

Now add these lengths to find the average length:
Average length = 0.32 + 0.40 + 0.50 + 0.45 + 0.48 = 2.15
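The greedy merge of the two lowest-probability subtrees can be sketched with a min-heap; this computes the expected code length without materialising the codes (a sketch; the function name and representation are ours):

```python
import heapq

def huffman_average_length(freqs):
    """Expected code length of an optimal Huffman code.
    freqs maps each symbol to its probability."""
    # heap entries: (probability, tie_breaker, {symbol: depth_so_far});
    # the integer tie_breaker keeps the dicts from ever being compared
    heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)   # two least-probable subtrees
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    depths = heap[0][2]
    return sum(freqs[s] * depths[s] for s in freqs)
```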
 Question 31

A binary search tree in which every non-leaf node has non-empty left and right subtrees is called a strictly binary tree. Such a tree with 19 leaves :

 A Cannot have more than 37 nodes B Has exactly 37 nodes C Has exactly 35 nodes D Cannot have more than 35 nodes
Question 31 Explanation:
• If every non-leaf node in a binary tree has nonempty left and right subtrees, the tree is called a strictly binary tree. To put it another way, all of the nodes in a strictly binary tree are of degree zero or two, never degree one.
• A strictly binary tree with n leaves always contains 2n − 1 nodes.
With 19 leaves: (2 × 19) − 1 = 38 − 1 = 37 nodes.
 Question 32

A binary search tree in which every non-leaf node has non-empty left and right subtrees is called a strictly binary tree. Such a tree with 19 leaves :

 A Cannot have more than 37 nodes B Has exactly 37 nodes C Has exactly 35 nodes D Cannot have more than 35 nodes
Question 32 Explanation:
• If every non-leaf node in a binary tree has nonempty left and right subtrees, the tree is called a strictly binary tree. To put it another way, all of the nodes in a strictly binary tree are of degree zero or two, never degree one.
• A strictly binary tree with n leaves always contains 2n − 1 nodes.
With 19 leaves: (2 × 19) − 1 = 38 − 1 = 37 nodes.
 Question 33
 A (a)-(iv), (b)-(i), (c)-(iii), (d)-(ii) B (a)-(iv), (b)-(iii), (c)-(i), (d)-(ii) C (a)-(iii), (b)-(iv), (c)-(ii), (d)-(i) D (a)-(iv), (b)-(iii), (c)-(ii), (d)-(i)
Question 33 Explanation:
1. 8-Queens Problem: Backtracking.
2. Single-Source Shortest Paths: Greedy approach.
3. Strassen's Matrix Multiplication: Divide and conquer technique.
4. Optimal Binary Search Trees: Dynamic programming.
 Question 34
The maximum number of comparisons needed to sort 9 items using radix sort is (assume each item is 5 digit octal number) :
 A 45 B 72 C 360 D 450
Question 34 Explanation:
• An octal number is a type of number that is composed of 8 possible values (0 to 7).
• But in this Questions They are saying 5 Digit octal Number possible values (0 to 4) it means Number of Digits is 5
Maximum comparisons = (Digits) x (Type of Number) x (Number of values)
where
Digits =5,
Type of Number(Base Value) =8 ,
Number of values=9

Therefore Maximum Number of comparisons are = 5* 8 * 9 = 360
 Question 35
A 5-ary tree is tree in which every internal node has exactly 5 children. The number of leaf nodes in such a tree with 8 internal nodes will be :
 A 30 B 33 C 45 D 125
Question 35 Explanation:
For a k-ary tree in which each node has either k children or no children,
the following relation holds:
L = (k−1)*n + 1

where
L is the number of leaf nodes and
n is the number of internal nodes,
since every internal node has exactly k children.

Given
k = 5
Number of internal nodes n = 8
Number of leaf nodes = (k-1)*n + 1
= (5-1)*8 + 1
= 4 * 8 + 1
= 33
 Question 36

E is the number of edges in the graph and f is maximum flow in the graph. When the capacities are integers, the runtime of Ford-Fulkerson algorithm is bounded by :

 A O(E∗f) B O(E2∗f) C O(E∗f2) D None of the above
Question 36 Explanation:
Ford-Fulkerson has a complexity of O(|E|·f*), where f* is the maximum flow of the network. The Ford-Fulkerson algorithm was eventually improved upon by the Edmonds-Karp algorithm, which does the same thing in O(V·E²) time, independent of the maximum flow value.
 Question 37
For the given recurrence equation
T(n)=2T(n-1), if n>0.
=1, otherwise

 A O(nlogn) B O(n²) C O(2ⁿ) D O(n)
ISRO CS 2017
Question 37 Explanation:
T(n) = 2T(n−1)
     = 2[2T(n−2)] = 2²T(n−2)
     = 2²[2T(n−3)] = 2³T(n−3)
     = ... = 2^k T(n−k)
With n−k = 0, i.e. k = n, and T(0) = 1:
T(n) = 2ⁿ
 Question 38
The smallest element in a binary max heap can be found in time ___
 A O(nlogn) B O(logn) C O(n) D O(n²)
Algorithms    ISRO-CS
Question 38 Explanation:
In a binary max heap, the smallest element is always a leaf node, either in the left or the right subtree. The heap property gives no ordering among the leaves, so to find the minimum you must examine every leaf.

Since roughly half the nodes of a heap are leaves, checking all of them gives a worst-case complexity of O(n).
 Question 39
___ is the worst-case time complexity for all operations (i.e. search, update and delete) in a general binary search tree
 A O(n) B O(nlogn) C O(logn) for search and insert, and O(n) for delete D None of these
Algorithms     NIELIT 2018-44
Question 39 Explanation:
Worst case:
The worst-case time complexity for binary search tree operations occurs when the tree is skewed, i.e. when the tree grows in only one direction.

In a skewed binary search tree (BST), all the operations search, update and delete can take O(n).

Best case:
The binary search tree is a balanced binary search tree, whose height is log(n), so the BST operations take O(logn).
 Question 40
Dijkstra’s algorithm is based on:
 A Greedy approach B Dynamic programming C Backtracking paradigm D Divide and conquer paradigm
Algorithms     NIELIT 2018-45
Question 40 Explanation:
• Dijkstra's Algorithm is a single-source shortest path algorithm: it finds the shortest path from a given source vertex to every other vertex
• This algorithm works for both directed and undirected graphs
• It works only for connected graphs
• The graph should not contain negative edge weights
• The algorithm predominantly follows Greedy approach for finding locally optimal solution.
• The main logic of this algorithm is based on the following formula- `dist[r]=min(dist[r], dist[q]+cost[q][r])`
 Question 41
For the given nodes:
89,19,50,17,12,15,2,5,7,11,6,9,100
Minimum _____ number of interchanges are needed to convert it into a max-heap
 A 3 B 4 C 5 D 6
Algorithms     NIELIT 2018-46
Question 41 Explanation:
 Question 42
_____ sorting algorithm has the lowest worst-case complexity
 A Selection sort B Bubble sort C Merge sort D Quick sort
Algorithms     NIELIT 2018-47
Question 42 Explanation:
Merge sort — O(n log n)
Bubble sort — O(n²)
Quick sort — O(n²)
Selection sort — O(n²)
 Question 43
Two alternative packages A and B are available for processing a database having 10^k records. Package A requires 0.0001n² time units and package B requires 10n·log₁₀n time units to process n records. What is the smallest value of k for which package B will be preferred over A?
 A 12 B 10 C 6 D 5
Question 43 Explanation:
10n·log₁₀n ≤ 0.0001n²
Let n = 10^k records, and substitute into the condition:
10 × 10^k × log₁₀(10^k) ≤ 0.0001 × (10^k)²
k × 10^(k+1) ≤ 10^(2k−4)
k ≤ 10^(2k−4−(k+1))
k ≤ 10^(k−5)
k = 5 does not satisfy this (5 > 10⁰ = 1), but k = 6 does (6 ≤ 10).
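The crossover can also be found numerically (a sketch; the function name is ours):

```python
import math

def preferred_k():
    """Smallest k (with n = 10**k records) for which package B beats package A."""
    k = 1
    while True:
        n = 10 ** k
        cost_a = 0.0001 * n * n          # package A: 0.0001 * n^2
        cost_b = 10 * n * math.log10(n)  # package B: 10 * n * log10(n)
        if cost_b <= cost_a:
            return k
        k += 1
```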
 Question 44
What is the type of the algorithm used in solving the 4 Queens problem?
 A Greedy B Branch and bound C Dynamic Programming D Backtracking
Question 44 Explanation:
Backtracking is a general algorithmic technique that incrementally builds candidate solutions and abandons (backtracks from) a partial candidate as soon as it cannot be completed to a valid solution.
Backtracking is usually applied to constraint satisfaction problems such as puzzles.
The 4-Queens problem consists of placing four queens on a 4x4 chessboard such that no two queens share the same row, the same column, or the same diagonal.
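A backtracking solver for the general n-queens problem can be sketched as follows (function name ours); extending a partial placement row by row and abandoning it on any conflict is exactly the backtracking pattern:

```python
def solve_n_queens(n):
    """Return all placements of n non-attacking queens, one column index per row."""
    solutions = []

    def place(row, cols):
        if row == n:                      # all rows filled: a valid solution
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # check column and both diagonals against every earlier row
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(row + 1, cols + [col])  # extend; backtrack on return
    place(0, [])
    return solutions
```

The 4-Queens problem has exactly 2 solutions.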
 Question 45
Selection sort and quick sort are stable sorting methods
 A True,True B False, False C True,False D False,False
Question 45 Explanation:
A stable sorting algorithm ensures that items that are equal to each other aren't reordered during the sorting process.

An algorithm is said to be stable if two equal elements appear in the same order in output(sorted objects list) as they appear in input list(unsorted list)
For example, take the 5 elements 4, 7, 2, 8, 2.
For our understanding, rename the sequence as 4, 7, 2(first occurrence), 8, 2(second occurrence). After sorting we get 2, 2, 4, 7, 8. But which 2 came first, 2(first occurrence) or 2(second occurrence)? If the equal elements appear in the same order as in the input, the algorithm is called stable; otherwise it is unstable. These sorting algorithms are usually stable:
• counting sort
• merge sort
• insertion sort
• Bubble Sort
• Binary Tree Sort.
These sorting algorithms are usually unstable:
• quicksort
• heapsort
• selection sort
Refer : https://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms
 Question 46
Which of the following sorting procedures is the slowest?
 A Quick sort B Merge sort C Shell sort D Bubble sort
Question 46 Explanation: Bubble sort is the slowest of these in all three cases (best, average and worst).
 Question 47
The recurrence relation capturing the optimal execution time of the towers of Hanoi problem with n discs is:
 A T(n)=2T(n-2)+2 B T(n)=2T(n/2)+1 C T(n)=2T(n-2)+n D T(n)=2T(n-1)+1
Question 47 Explanation:
Recurrence relation for Towers of Hanoi
T(1)=1
T(n)=2T(n−1)+1
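The recurrence counts the minimum moves directly: move n−1 discs aside, move the largest disc, move the n−1 discs back. A sketch (function name ours):

```python
def hanoi_moves(n):
    """Minimum moves for n discs, following T(n) = 2*T(n-1) + 1, T(1) = 1."""
    if n == 1:
        return 1
    return 2 * hanoi_moves(n - 1) + 1   # n-1 aside, largest disc, n-1 back
```

The closed form is 2^n − 1 moves.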
 Question 48
Complexity of Kruskal's algorithm for finding the minimum spanning tree of an undirected graph containing n vertices and m edges, if the edges are already sorted, is:
 A O(mn) B O(m) C O(m+n) D O(n)
Question 48 Explanation:
Kruskal’s Algorithm finds the Minimum Spanning Tree (MST) of a graph that is weighted, connected and undirected.

If the edges are already sorted, then there is no need to construct a min-heap,
so the heap deletion time is saved.
In this case, the time complexity of Kruskal’s Algorithm = O(E + V)
where E edges and V vertices
 Question 49
Two alternative packages A and B are available for processing a database having 10^k records. Package A requires 0.0001n² time units and package B requires 10n·log₁₀n time units to process n records. What is the smallest value of k for which package B will be preferred over A?
 A 12 B 10 C 6 D 5
Question 49 Explanation:
10n·log₁₀n ≤ 0.0001n²
Let n = 10^k records, and substitute into the condition:
10 × 10^k × log₁₀(10^k) ≤ 0.0001 × (10^k)²
k × 10^(k+1) ≤ 10^(2k−4)
k ≤ 10^(2k−4−(k+1))
k ≤ 10^(k−5)
k = 5 does not satisfy this (5 > 10⁰ = 1), but k = 6 does (6 ≤ 10).
 Question 50
You are given the postorder traversal, p, of a binary search tree on the n elements 1, 2, ..., n. You have to determine the unique binary search tree that has p as its postorder traversal. What is the time complexity of the most efficient algorithm for doing this?
 A O(logn) B O(n) C O(nlogn) D None of the above, as the tree cannot be uniquely determined.
Question 50 Explanation:
1. You have given n elements from 1 to n.
2. In BST(binary search tree), inorder traversal is always sorted ordered.

Algorithm to build a binary tree from inorder and postorder traversals:
1. Take the last element of the postorder as the root of the binary tree. This takes O(1) time.
2. Perform binary search on the given sequence 1 to n to find that element in the inorder traversal. This takes O(log n) time.
3. Divide the inorder traversal around that element, using it as a pivot.
4. Repeat for both parts. This takes T(k) and T(n−k−1) time.
The recurrence relation is T(n) = T(k) + T(n−k−1) + O(log n) = O(n log n). This is the time complexity to build a binary tree from its inorder and postorder/preorder traversals.

Since the given tree is a binary search tree on 1..n, there is no need to perform binary search: we simply take the pivot and split the list 1 to n around it. The recurrence becomes T(n) = T(k) + T(n−k−1) + O(1) = O(n), i.e., O(n) time to build a binary search tree from its postorder (or preorder) traversal.
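The O(n) construction can be sketched by consuming the postorder from the right while tracking valid (lo, hi) bounds for each subtree; the dict-based node representation is just for illustration:

```python
def bst_from_postorder(post):
    """Rebuild the unique BST from its postorder traversal in O(n)."""
    idx = [len(post) - 1]                # next postorder position to consume

    def build(lo, hi):
        if idx[0] < 0:
            return None
        val = post[idx[0]]
        if not (lo < val < hi):          # value belongs to an ancestor's subtree
            return None
        idx[0] -= 1
        node = {'val': val}
        node['right'] = build(val, hi)   # right subtree appears later in postorder
        node['left'] = build(lo, val)
        return node

    return build(float('-inf'), float('inf'))
```

Each element is examined a constant number of times, so the whole tree is built in linear time.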
 Question 51
The most efficient algorithm for finding the number of connected components in an undirected graph on n vertices and m edges has time complexity
 A O(n) B O(m) C O(m+n) D O(mn)
Question 51 Explanation:
The number of connected components in an undirected graph can be found by running a DFS or BFS from every yet-unvisited vertex.
Its time complexity is Θ(m+n)
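A sketch of the BFS-based count (adjacency lists assumed; names illustrative):

```python
from collections import deque

def count_components(n, adj):
    """Number of connected components; adj[v] lists the neighbours of v."""
    seen = [False] * n
    components = 0
    for start in range(n):
        if seen[start]:
            continue
        components += 1                   # new, previously unreached component
        seen[start] = True
        queue = deque([start])
        while queue:                      # BFS floods the whole component
            v = queue.popleft()
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
    return components
```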
 Question 52
An element in an array X is called a leader if it is greater than all elements to the right of it in X. The best algorithm to find all leaders in an array
 A Solves it in linear time using a left to right pass of the array B Solves in linear time using a right to left pass of the array C Solves it using divide and conquer in time O(nlogn) D Solves it in time O(n​ 2​ )
Question 52 Explanation:
For example array {6, 7, 4, 3, 5, 2}, leaders are 7, 5 and 2.

Method 1 (Brute Force)
Use two loops. The outer loop runs from 0 to length – 1 and one by one picks all elements from left to right. The inner loop compares the picked element to all the elements to its right side. If the picked element is greater than all the elements to its right side, then the picked element is the leader.
Time Complexity: O(n*n)

Method 2 (Scan the array from right)
Start traversing from the right side of the array, keeping the highest element seen so far in a variable. If we find any element greater than this variable, print it and update the variable with the new value.
Time Complexity: O(n)
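Method 2 can be sketched as:

```python
def leaders(arr):
    """Right-to-left pass: an element is a leader if it exceeds everything to its right."""
    result = []
    highest = float('-inf')
    for x in reversed(arr):
        if x > highest:          # strictly greater than all elements to the right
            result.append(x)
            highest = x
    return result[::-1]          # restore left-to-right order
```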
 Question 53
Consider the following terminology and match List I with List II and choose the correct answer from the code given below. b= branching factor d = depth of the shallowest solution m= maximum depth of the search tree l=depth limit A (a)-(iii), (b)-(ii), (c)-(iv), (d)-(i) B (a)-(ii), (b)-(iii), (c)-(iv), (d)-(i) C (a)-(i), (b)-(ii), (c)-(iv), (d)-(iii) D (a)-(i), (b)-(iii), (c)-(iv), (d)-(ii)
Question 53 Explanation:
Branching factor
The branching factor is the number of children at each node

Solution depth
The length of the shortest path from the initial node to a goal node.

Maximum depth
The maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node.
Note: A leaf is a node with no children.

Depth limit - The unbounded tree problem appeared in DFS can be fixed by imposing a limit on the depth that DFS can reach, this limit we will call depth limit.
• BFS → O(b^d) worst-case space complexity
• DFS → O(bm) worst-case space complexity
• Depth-Limited Search → O(bl)
• Iterative Deepening Search → O(bd)
 Question 54
Consider the graph shown below : Use Kruskal’s algorithm to find the minimum spanning tree of the graph. The weight of this minimum spanning tree is
 A 17 B 14 C 16 D 13
Question 54 Explanation:
Applying Kruskal's algorithm, the weight of the Minimum Spanning Tree (MST) is 2+1+2+1+3+4+2+1 = 16.
 Question 55
Match List 1 with List 2 and choose the correct answer from the code given below, where V and E are the number of vertices and edges in the graph respectively.
Code:
 A (a)-(i), (b)-(iii), (c)-(iv), (d)-(ii) B (a)-(i), (b)-(iii), (c)-(ii), (d)-(iv) C (a)-(iii), (b)-(i), (c)-(ii), (d)-(iv) D (a)-(iii), (b)-(i), (c)-(iv), (d)-(ii)
Question 55 Explanation:
Dijkstra’s algorithm Time Complexity: O(V²)
Case-1:
This case is valid when-
• The given graph G is represented as an adjacency matrix.
• Priority queue Q is represented as an unordered list.
• A[i,j] stores the information about edge (i,j).
• Time taken for selecting i with the smallest dist is O(V).
• For each neighbor of i, time taken for updating dist[j] is O(1) and there will be maximum V neighbors.
• Time taken for each iteration of the loop is O(V) and one vertex is deleted from Q.
• Thus, total time complexity becomes O(V2).
Case-2:
This case is valid when-
• The given graph G is represented as an adjacency list.
• Priority queue Q is represented as a binary heap.
• With adjacency list representation, all vertices of the graph can be traversed using BFS in O(V+E) time.
• In min heap, operations like extract-min and decrease-key value takes O(logV) time.
• So the overall time complexity becomes O(E+V) × O(log V), which is O((E + V) × log V) = O(E log V)
• This time complexity can be reduced to O(E + V log V) using a Fibonacci heap.
Kruskal’s algorithm Time Complexity: O(E log E) or O(E log V). Sorting the edges takes O(E log E) time. After sorting, we iterate through all edges and apply the find-union algorithm; the find and union operations can take at most O(log V) time. So the overall complexity is O(E log E + E log V). The value of E can be at most V², so O(log V) and O(log E) are the same; therefore, the overall time complexity is O(E log E) or O(E log V).

Floyd-Warshall algorithm: The Floyd-Warshall all-pairs shortest path algorithm runs in O(n³) time, because it uses a triple (3-deep) for loop, each loop with n iterations.

Topological sorting: the algorithm is simply DFS with an extra stack, so its time complexity is the same as DFS, which is O(V+E).
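The heap-based Dijkstra variant discussed in Case 2 can be sketched like this (lazy re-insertion instead of an explicit decrease-key, a common Python idiom since `heapq` has no decrease-key):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with a binary heap: O(E log V).
    adj[u] is a list of (v, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale entry; u was settled earlier
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd              # "decrease-key" via re-insertion
                heapq.heappush(heap, (nd, v))
    return dist
```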
 Question 56
The second smallest of n elements can be found with _______ comparisons in worst case.
 A n + ⌈log n⌉ − 2 B n − 1 C log n D 3n/2
Question 56 Explanation:
Let's take an Example :
Input : 1 2 3 4 5 6 7 8
output : 2 (2 is the second smallest number). Finding the smallest element, 1, by a tournament took n − 1 = 7 comparisons.
Now compare all the numbers that 1 was compared against during the tournament (here 2, 3 and 5): the smallest of them, 2, is the second smallest. This took ⌈log n⌉ − 1 = 2 comparisons.

Hence the total is
(n − 1) + (⌈log n⌉ − 1)
= n + ⌈log n⌉ − 2
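The tournament method behind this bound can be sketched as follows (function name illustrative): find the minimum with n − 1 pairwise comparisons, then take the minimum of the at most ⌈log₂ n⌉ elements that lost directly to the winner.

```python
def second_smallest(a):
    """Tournament method: (n-1) comparisons to find the minimum,
    then ceil(log2 n) - 1 more among the elements it beat."""
    players = [(x, []) for x in a]        # (value, values it has beaten)
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):
            (x, beat_x), (y, beat_y) = players[i], players[i + 1]
            if x < y:
                beat_x.append(y)
                nxt.append((x, beat_x))
            else:
                beat_y.append(x)
                nxt.append((y, beat_y))
        if len(players) % 2:              # odd player gets a bye this round
            nxt.append(players[-1])
        players = nxt
    winner, beaten = players[0]
    return min(beaten)                    # second smallest lost only to the winner
```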
 Question 57
Consider two sequences X and Y :
X = <0,1,2,1,3,0,1 >
Y = <1,3,2,0,1,0 >

The length of longest common subsequence between X and Y is

 A 4 B 2 C 3 D 5
Question 57 Explanation:
If X[i] matches Y[j], then A[i, j] = A[i−1, j−1] + 1.
If there is no match, then A[i, j] = max(A[i−1, j], A[i, j−1]).
Longest Common Subsequence table (rows indexed by Y, columns by X; the first row and column correspond to the empty prefix):
Y\X   –   0   1   2   1   3   0   1
 –    0   0   0   0   0   0   0   0
 1    0   0   1   1   1   1   1   1
 3    0   0   1   1   1   2   2   2
 2    0   0   1   2   2   2   2   2
 0    0   1   1   2   2   2   3   3
 1    0   1   2   2   3   3   3   4
 0    0   1   2   2   3   3   4   4
Therefore, length of longest common subsequence is 4
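The recurrence above translates directly into a bottom-up table, sketched here:

```python
def lcs_length(X, Y):
    """Bottom-up LCS table; A[i][j] = LCS length of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    A = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                A[i][j] = A[i - 1][j - 1] + 1          # match: extend the diagonal
            else:
                A[i][j] = max(A[i - 1][j], A[i][j - 1])
    return A[m][n]
```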
 Question 58

The solution of recurrence relation : T(n) = 2T(sqrt(n)) + lg(n) is

 A O(lg n · lg n) B O(lg n) C O(n lg n) D O(lg n · lg(lg n))
Question 58 Explanation:
T(n) = 2T(√n) + log n
T(n) = 2T(n^(1/2)) + log n
Put n = 2^k:
T(2^k) = 2T((2^k)^(1/2)) + k
T(2^k) = 2T(2^(k/2)) + k
Put S(k) = T(2^k), so
S(k) = 2S(k/2) + k
By the Master theorem,
S(k) = O(k log k)
Hence T(n) = O(log n · log log n)
 Question 59
The worst case running times of insertion sort, merge sort and quick sort respectively are
 A O(n log n), O(n log n) and O(n²) B O(n²), O(n²) and O(n log n) C O(n²), O(n log n) and O(n log n) D O(n²), O(n log n) and O(n²)
Question 59 Explanation:
Insertion sort and quick sort run in O(n²) in the worst case, while merge sort runs in O(n log n) in every case.
 Question 60
Which of the following standard algorithms is not Dynamic Programming based ?
 A Bellman ford algorithm for single source shortest path B Floyd Warshall for all pairs shortest paths C 0-1 knapsack problem D Prim’s minimum spanning tree
Question 60 Explanation:
• Bellman ford algorithm for single source shortest path - Dynamic Programming based
• Floyd Warshall for all pairs shortest paths  - Dynamic programming Based
• 0-1 knapsack problem - Dynamic Programming based
• Prim's Minimum Spanning Tree - Greedy Algorithm.
 Question 61
Kadane algorithm is used to find
 A Maximum sum subsequence in an array B Maximum sum subarray in an array C Maximum product subsequence in an array D Maximum product subarray in an array
Question 61 Explanation:
Maximum Sum Subarray Problem (Kadane’s Algorithm)
Given an array of integers, find contiguous subarray within it which has the largest sum.

Example
Input:  {-2, 1, -3, 4, -1, 2, 1, -5, 4}

Output: Subarray with the largest sum is {4, -1, 2, 1} with sum 6.

We can easily solve this problem in linear time using Kadane’s algorithm. The idea is to maintain the maximum (positive-sum) subarray “ending” at each index of the given array. This subarray is either empty (in which case its sum is zero) or consists of one more element than the maximum subarray ending at the previous index.

The time complexity of Kadane’s Algorithm is O(n) and auxiliary space used by the program is O(1)
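A minimal sketch of Kadane's algorithm as described (empty subarrays allowed, so the running sum is clamped at zero):

```python
def kadane(arr):
    """Largest sum of a contiguous (possibly empty) subarray, in O(n) time, O(1) space."""
    best = current = 0
    for x in arr:
        current = max(0, current + x)   # best sum of a subarray ending here
        best = max(best, current)
    return best
```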
 Question 62
Given a random unsorted array A in which every element is at most d distance from its position in the sorted array, where d < size(A). If we apply insertion sort to this array, the time complexity of the algorithm is:
 A O(n log d) B O(n² log d) C O(nd) D O(n²d)
Algorithms   Nielit STA [02-DEC-2018]
Question 62 Explanation:
For each element x in the array, the maximum number of shifts is bounded by O(d).
Therefore, for n elements the total complexity is bounded above by O(n·d).
 Question 63
Let G be an undirected connected graph with distinct edge weights, and let the maximum-weight and minimum-weight edges be emax and emin respectively. Which of the following statements is TRUE?
 A Every minimum spanning tree of G must contain emin and emax B If emax is in a minimum spanning tree, then its removal must disconnect G C No minimum spanning tree contains emax D G has a multiple minimum spanning trees E None
Question 63 Explanation:
Option(A) : False
Every minimum spanning tree of G must contain emin but "may or may not" emax.
Kruskal's algorithm always picks the edges in ascending order of their weights while constructing a minimum spanning tree of G.First edge could never create a cycle.
Every minimum spanning tree of G "may or may not" contain emax explained in option(c)

Option(B) : False
If emax is in a minimum spanning tree, then its removal must disconnect G "not always true".
emax would be included in MST "if and only if , emax is a bridge between two connected components" , removal of which will surely disconnect the graph. Option(C) : False
No minimum spanning tree contains emax
should be written as "may or may not", to be true.
Refer Case 1 in option (B)

Option(D) : False
G has a multiple minimum spanning trees
G has unique edge weights , so MST will be unique . In case if edge weights were repeating , there could've been a possibility of non-unique MSTs.
 Question 64
Consider bottom-up merge sort working on n elements such that n = 2^i. The minimum number of comparisons needed to obtain the sorted list is:
 A n log(n/2) B n log n C (n log n)/2 D log n
Question 64 Explanation:
The minimum number of comparisons occurs when, in every merge, all elements of one subarray are smaller than the first element of the other, so merging two runs of length k costs only k comparisons.
In that case each level performs n/2 comparisons in total, and in general the count is
= n/2 × number of levels
= n/2 × log₂ n
= (n log n)/2
For n = 2 this gives 1 comparison; for n = 4, 4 comparisons; and so on.
 Question 65
The time complexity of the Tarjan’s algorithm for finding the strongly connected components of a graph having m vertices and n edges is:
 A O(m*n) B O(m+n) C O(m2) D O(mn)
Question 65 Explanation:
Tarjan's Algorithm is an efficient graph algorithm to find the strongly connected components in a directed graph in linear time by utilizing Depth First Search traversal of a graph. The key idea used is that nodes of strongly connected component form a subtree in the DFS spanning tree of the graph.

Time complexity:
The algorithm is built upon DFS, so each node is visited once and only once. For each node we perform a constant amount of work and iterate over its adjacency list; thus the time complexity is O(|V| + |E|).
At maximum, the depth of recursion and the size of the stack can reach n nodes, so the space complexity is O(|V|).
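A compact sketch of the DFS index/lowlink bookkeeping described above (recursive, so suited to small graphs; names are illustrative):

```python
def tarjan_scc(adj):
    """Strongly connected components of a directed graph in O(V + E).
    adj[v] lists the successors of vertex v."""
    n = len(adj)
    index = [None] * n          # DFS discovery order
    low = [0] * n               # lowest index reachable from the subtree
    on_stack = [False] * n
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack[v] = True
        for w in adj[v]:
            if index[w] is None:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif on_stack[w]:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of an SCC: pop its subtree
            comp = []
            while True:
                w = stack.pop()
                on_stack[w] = False
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in range(n):
        if index[v] is None:
            strongconnect(v)
    return sccs
```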
 Question 66
Which of the following sorting algorithms does not have a worst case running time of O(n²)?
 A Insertion sort B Merge sort C Quick sort D Bubble sort
Question 66 Explanation:
Merge sort has a worst-case running time of O(n log n); insertion sort, quick sort and bubble sort are all O(n²) in the worst case.
 Question 67
If there is an NP-Complete language L whose complement is in NP, then the complement of any language in NP is in
 A P B NP C Both (A) and (B) D None of these
Question 67 Explanation:
"If there is an NP-complete language L whose complement is in NP, then the complement of any language in NP is in NP."
This follows from the following three facts.
1) The complement of an NP-complete problem is co-NP-complete;
2) if any co-NP-complete problem is in a class C that is closed under polynomial-time reductions, then co-NP ⊆ C;
3) NP is closed under polynomial-time reductions.
 Question 68
Time complexity of an algorithm T(n), where n is the input size is given by
T(n)=T(n-1)+1/n, if n>1
=1, otherwise
The order of this algorithm is
 A log n B n C n² D nⁿ
Question 68 Explanation:
T(n) = T(n−1) + 1/n (given)
T(n−1) = T(n−2) + 1/(n−1); substituting into the above:
T(n) = T(n−2) + 1/(n−1) + 1/n
T(n) = T(n−3) + 1/(n−2) + 1/(n−1) + 1/n
T(n) = T(n−4) + 1/(n−3) + 1/(n−2) + 1/(n−1) + 1/n
...
T(n) = T(n−k) + 1/(n−(k−1)) + 1/(n−(k−2)) + 1/(n−(k−3)) + ... + 1/n
Assume T(1) = 1 and substitute k = n−1:
T(n) = T(1) + 1/2 + 1/3 + 1/4 + 1/5 + ... + 1/n
T(n) = 1 + 1/2 + 1/3 + 1/4 + 1/5 + ... + 1/n   [the harmonic series]
Therefore T(n) = O(log n)
 Question 69
To sort many large objects or structures, it would be most efficient to
 A Place references to them in an array and sort the array B Place them in a linked list and sort the linked list C Place pointers to them in an array and sort the array D Place them in an array and sort the array
Question 69 Explanation:
By sorting an array of pointers we only move the small pointer values instead of copying the large objects or structures themselves; the objects stay in place and any rearrangement is reflected simply through the pointers.
 Question 70
Consider the following terminology and match List I with List II and choose the correct answer from the code given below.
b= branching factor
d = depth of the shallowest solution
m= maximum depth of the search tree
l=depth limit A (a)-(iii), (b)-(ii), (c)-(iv), (d)-(i) B (a)-(ii), (b)-(iii), (c)-(iv), (d)-(i) C (a)-(i), (b)-(ii), (c)-(iv), (d)-(iii) D (a)-(i), (b)-(iii), (c)-(iv), (d)-(ii)
Question 70 Explanation:
Branching factor
The branching factor is the number of children at each node

Solution depth
The length of the shortest path from the initial node to a goal node.

Maximum depth
The maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node.
Note: A leaf is a node with no children.

Depth limit - The unbounded tree problem appeared in DFS can be fixed by imposing a limit on the depth that DFS can reach, this limit we will call depth limit.
• BFS → O(b^d) worst-case space complexity
• DFS → O(bm) worst-case space complexity
• Depth-Limited Search → O(bl)
• Iterative Deepening Search → O(bd)
 Question 71
In a ternary tree, the number of internal nodes of degree 1,2 and 3 is 4,3 and 3 respectively. The number of leaf nodes in the ternary tree is
 A 11 B 12 C 10 D 9
Question 71 Explanation:
The degree of the tree is the total number of it's children i-e the total number nodes that originate from it. The leaf of the tree doesnot have any child so its degree is zero

It is given that:
The nodes with degree 1 is 4. ( 4 nodes having 1 child,therefore, degree = internal nodes + the root = 1+1=2)
The nodes with degree 2 is 3. ( 3 nodes having 2 child, therefore, degree = internal nodes + the root = 2+1=3)
The nodes with degree 3 is 3. ( 3 nodes having 3 children,therefore, degree = internal nodes + the root = 3+1=4)
1 root node with degree 3(number of children and because ternary and root has no parent --> degree == 3 + 0 = 3)

Let the number of leaf nodes be N. The degree of those leaf nodes is 1 because they are only attached to their parent.

Total number of nodes in tree = internal nodes + leafs node = 4 + 3 + 3 + N = 10 + N.

Therefore, the total number of edges = total number of nodes - 1 = 9+N.

For ternary trees: Sum of degrees of all vertices = 2*total number of edges.
(1*3) + (2*4) + (3*3) + (4*2) + (N*1) = 2(9+N)
3 + 8 + 9 + 8 + N = 18 + 2N
N = 10.
 Question 72
Consider the graph shown below : Use Kruskal’s algorithm to find the minimum spanning tree of the graph. The weight of this minimum spanning tree is
 A 17 B 14 C 16 D 13
Question 72 Explanation:
Applying Kruskal's algorithm, the weight of the Minimum Spanning Tree (MST) is 2+1+2+1+3+4+2+1 = 16.
 Question 73
The second smallest of n elements can be found with _______ comparisons in worst case.
 A n + ⌈lg n⌉ − 2 B n − 1 C lg n D 3n/2
Question 73 Explanation:
Let's take an Example :
Input : 1 2 3 4 5 6 7 8
output : 2 (2 is the second smallest number). Finding the smallest element, 1, by a tournament took n − 1 = 7 comparisons.
Now compare all the numbers that 1 was compared against during the tournament (here 2, 3 and 5): the smallest of them, 2, is the second smallest. This took ⌈log n⌉ − 1 = 2 comparisons.

Hence the total is
(n − 1) + (⌈log n⌉ − 1)
= n + ⌈log n⌉ − 2
 Question 74
Consider two sequences X and Y :
X = <0,1,2,1,3,0,1 >
Y = <1,3,2,0,1,0 >
The length of longest common subsequence between X and Y is
 A 4 B 2 C 3 D 5
Question 74 Explanation:
If X[i] matches Y[j], then A[i, j] = A[i−1, j−1] + 1.
If there is no match, then A[i, j] = max(A[i−1, j], A[i, j−1]).
Longest Common Subsequence table (rows indexed by Y, columns by X; the first row and column correspond to the empty prefix):
Y\X   –   0   1   2   1   3   0   1
 –    0   0   0   0   0   0   0   0
 1    0   0   1   1   1   1   1   1
 3    0   0   1   1   1   2   2   2
 2    0   0   1   2   2   2   2   2
 0    0   1   1   2   2   2   3   3
 1    0   1   2   2   3   3   3   4
 0    0   1   2   2   3   3   4   4
Therefore, length of longest common subsequence is 4
 Question 75
Consider the midpoint (or Bresenham) algorithm for rasterizing lines
given below :
(1) Input (x1, y1) and (x2, y2)
(2) y = y1
(3) d = f(x1 + 1, y1 + 1/2) // f is the implicit form of a line
(4) for x = x1 to x2
(5) do
(6) plot(x, y)
(7) if (d < 0)
(8) then
(9) y = y + 1
(10) d = d + (y1 − y2) + (x2 − x1)
(11) else
(12) d = d + (y1 − y2)
(13) end
(14) end
Which statements are true ?
P: For a line with slope m>1, we should change the outer loop in line (4) to be over y.
Q: Lines (10) and (12) update the decision variable d through an incremental evaluation of the line equation f.
R: The algorithm fails if d is ever 0.
 A Q and R only B P only C P and Q only D P,Q and R
Question 75 Explanation:
P: For a line with slope m > 1, we should change the outer loop in line (4) to be over y. True.
Q: Lines (10) and (12) update the decision variable d through an incremental evaluation of the line equation f. True.
R: The algorithm fails if d is ever 0. False: d = 0 simply means the midpoint lies exactly on the line; the else branch handles it and the algorithm still works.
 Question 76
In PERT/CPM, the merge event represents___________ of two or more events.
 A Splitting B Completion C Beginning D Joining
Question 76 Explanation:
In a PERT/CPM network, a merge event is one into which two or more activity arrows enter; it represents the joining of two or more events.
 Question 77
​The solution of recurrence relation : T(n)=2T(sqrt(n))+lg(n) is
 A O(lg n · lg n) B O(lg n) C O(n lg n) D O(lg n · lg(lg n))
Question 77 Explanation:
T(n) = 2T(√n) + log n
T(n) = 2T(n^(1/2)) + log n
Put n = 2^k:
T(2^k) = 2T((2^k)^(1/2)) + k
T(2^k) = 2T(2^(k/2)) + k
Put S(k) = T(2^k), so
S(k) = 2S(k/2) + k
By the Master theorem,
S(k) = O(k log k)
Hence T(n) = O(log n · log log n)
 Question 78
The Knapsack problem belongs to which domain of problems?
 A Optimization B NP complete C Linear Solution D Sorting
Question 78 Explanation:
• The 0/1 knapsack problem belongs to the class of "NP problems", where NP stands for "nondeterministic polynomial time".
• The fractional knapsack problem belongs to the class of "P problems".
In general, the knapsack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.

Refer : https://en.wikipedia.org/wiki/Knapsack_problem
 Question 79
The running time of quick sort algorithm depends heavily on the selection of:
 A No. of inputs B Arrangement of elements in an array C Size of elements D Pivot element
Question 79 Explanation:
The running time of the Quick Sort Algorithm depends on how the partition() method will split the input array
Since the partition() method depends on the pivot, we conclude that:
the value of the pivot will affect the performance of Quick Sort.

The Quick Sort algorithm performs the worst when the pivot is the largest or smallest value in the array
Reason:
When the pivot is the largest or smallest value in the array, the partition() step will do nothing to improve the ordering of the array elements

The Quick Sort algorithm performs best when the pivot is the middle value of the array.
Reason:
In the best case, the input array is divided into 2 subarrays, each containing approximately n/2 elements.
 Question 80
Two main measures for the efficiency of an algorithm are:
 A Processor and Memory B Complexity and Capacity C Time and Space D Data and Space
Question 80 Explanation:
The performance of an algorithm can be analyzed by two important factors. Those are:
1. Space complexity : The amount of memory space an algorithm needs to run to completion
2. Time complexity : The amount of computer time an algorithm needs to run to completion
 Question 81
What is the solution to the recurrence T(n)=T(n/2)+n?
 A O(log n) B O(n) C O(n logn) D None of these
Question 81 Explanation:
Master's theorem:
T(n) = a·T(n/b) + Θ(n^k · log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.

Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))

Case 2: if a = b^k,
a) if p > −1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = −1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < −1, then T(n) = Θ(n^(log_b a))

Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k · log^p n)
b) if p < 0, then T(n) = Θ(n^k)

Given
T(n) = T(n/2) + n
which matches
T(n) = 1·T(n/2) + n¹ · log⁰ n
with a = 1, b = 2, k = 1, p = 0.
This is Case 3 of the Master theorem, a < b^k (1 < 2¹), with p ≥ 0, so
T(n) = Θ(n^k · log^p n) = Θ(n¹ · log⁰ n) = Θ(n)
 Question 82
The concept of order Big O is important because:
 A It can be used to decide the best algorithm that solves a given problem B It is the lower bound of the growth rate of algorithm C It determines the maximum size of a problem that can be solved in a given amount of time D Both (A) and (B)
Question 82 Explanation:
• Big O notation is one of the most fundamental tools for computer scientists to analyze the cost of an algorithm.
• Big-O is the upper bound, Big-Omega is the lower bound, and Big-Theta bounds the growth rate on both sides.
 Question 83
0/1-Knapsack is a well-known problem where it is desired to get the maximum total profit by placing n items (each item having some weight and an associated profit) into a knapsack of capacity W. The table given below shows the weights and associated profits for 5 items, where one unit of each item is available to you. It is also given that the knapsack capacity W is 8. If the given 0/1 knapsack problem is solved using dynamic programming, which one of the following will be the maximum earned profit by placing the items into the knapsack of capacity 8? A 19 B 18 C 17 D 20
Question 83 Explanation: Given capacity W = 8.
Following the profit/weight-ratio order for this instance:
Item 1 has the highest profit/weight ratio, so we take item 1: profit = 3, remaining capacity 8 − 1 = 7.
Item 2 has the next highest ratio, so we take item 2: profit = 5, remaining capacity 7 − 2 = 5.

Now we have two options, item 3 or item 5, because they have the same profit/weight ratio; but item 5 weighs 8 while our remaining capacity is only 5, so we cannot pick item 5.

If we pick item 3: profit = 9, remaining capacity 5 − 4 = 1 (1 unit of capacity is wasted).
So we pick item 4 instead: profit = 11, remaining capacity 5 − 5 = 0.
We therefore pick items 1, 2 and 4: 3 + 5 + 11 = 19.
Total profit = 19.
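The weight/profit table itself is not reproduced above; from the explanation one can infer (weight, profit) pairs (1, 3), (2, 5), (4, 9) and (5, 11) for items 1-4, and item 5's values are assumed here purely for illustration. A dynamic-programming sketch under those assumptions:

```python
def knapsack_01(items, W):
    """Classic 0/1 knapsack DP over capacities; items are (weight, profit) pairs."""
    dp = [0] * (W + 1)                    # dp[c] = best profit with capacity c
    for w, p in items:
        for cap in range(W, w - 1, -1):   # descending: each item used at most once
            dp[cap] = max(dp[cap], dp[cap - w] + p)
    return dp[W]

# Items 1-4 inferred from the explanation; item 5 (weight 8) is assumed.
items = [(1, 3), (2, 5), (4, 9), (5, 11), (8, 18)]
```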
 Question 84
What is the running time of the following function(specified as a function of the input value)?
void Function(int n)
{
    int i = 1;
    int s = 1;
    while (s <= n)
    {
        i++;
        s = s + i;
    }
}
 A O(n) B O(n​ 2​ ) C O(1) D O(​√n)
Question 84 Explanation:
s satisfies the relation s_i = s_{i−1} + i, and the value of i increases by 1 in each iteration.
The value contained in s at the i-th iteration is the sum of the first i positive integers. If k is the total number of iterations taken by the program, the while loop terminates once
1 + 2 + 3 + ... + k = k(k+1)/2 > n, so k = O(√n).
 Question 85
A machine needs a minimum of 100 sec to sort 1000 names by quick sort. The minimum time needed to sort 100 names will be approximately
 A 50.2 sec B 6.7 sec C 72.7 sec D 11.2 sec
Question 85 Explanation:
Quick sort takes about n log n comparisons.
So 1000 × log(1000) ≈ 9000 comparisons, which take 100 sec.
To sort 100 names, a minimum of about 100 × log(100) ≈ 600 comparisons is needed.
Scaling the running time: 100 × 600/9000 ≈ 6.7 sec.
 Question 86
Six files F1, F2, F3, F4, F5 and F6 have 100, 200, 50, 80, 120, 150 records respectively. In what order should they be stored so as to optimize access time? Assume each file is accessed with the same frequency.
 A F3, F4, F1, F5, F6, F2 B F2, F6, F5, F1, F4, F3 C F1, F2, F3, F4, F5, F6 D Ordering is immaterial as all files are accessed with the same frequency.
Question 86 Explanation:
• This question is based on the Optimal Storage on Tape problem
• Greedy approach is used to find the optimal time to retrieve them.
• The files are to be stored sequentially on tape.
• To read a particular file we need to start from beginning of tape..
• here goal is to find such order of storage that cost to access file is minimum..in order to achieve the files are stored in increasing order of length
So in above eg files will be stored in order F3 F4 F1 F5 F6 F2
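A sketch of the greedy rule (store shortest first); the record counts are taken from the question and the function name is illustrative:

```python
def optimal_tape_order(files):
    """Optimal Storage on Tape: storing files in increasing order of length
    minimizes the total (equivalently, mean) retrieval time."""
    return sorted(files, key=files.get)

records = {'F1': 100, 'F2': 200, 'F3': 50, 'F4': 80, 'F5': 120, 'F6': 150}
```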
 Question 87
The time complexity of the following C function is (assume n > 0)
int recursive (int n)
{
if(n == 1)
return (1);
else
return (recursive (n-1) + recursive (n-1));
}
 A O(n) B O(n log n) C O(n²) D O(2ⁿ)
Question 87 Explanation:
The function makes two recursive calls on n−1, so T(n) = 2T(n−1) + c, which solves to O(2ⁿ).
 Question 88
The number of spanning tree for a complete graph with 7 vertices are:
 A 2⁵ B 7⁵ C 3⁵ D 2²×5
Question 88 Explanation:
If a graph is a complete graph with n vertices,
then the total number of spanning trees is n^(n−2),
where n is the number of nodes in the graph.
• So the number of spanning trees possible in a complete graph with 7 nodes is 7^(7−2) = 7⁵.
 Question 89
If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:
20, 47, 15, 8, 9, 4, 40, 30, 12, 17
then the order of these elements after second pass of the algorithm is:
 A 8, 9, 15, 20, 47, 4, 12, 17, 30, 40 B 8, 15, 20, 47, 4, 9, 30, 40, 12, 17 C 15, 20, 47, 4, 8, 9, 12, 30, 40, 17 D 4, 8, 9, 15, 20, 47, 12, 17, 30, 40
Question 89 Explanation:
Pass 1 merges runs of length 1 into sorted pairs: 20 47 | 8 15 | 4 9 | 30 40 | 12 17.
Pass 2 merges adjacent pairs into sorted runs of length 4: 8 15 20 47 | 4 9 30 40 | 12 17, which matches option B.
 Question 90
What is the total number of spanning trees of a complete graph of 4 vertices (K4)?
 A 16 B 8 C 4 D 15
Question 90 Explanation:
If a graph is a complete graph with n vertices, then the total number of spanning trees is n^(n−2), where n is the number of nodes in the graph.
• So the number of spanning trees possible in a complete graph with 4 nodes is 4^(4−2) = 4² = 16.
 Question 91
Which of the following data structures is most suitable for radix sort ?
 A Stack B Binary search tree C Tree D Linked list
Question 91 Explanation:
Radix sort isn't a comparison-based sort, which means it can attain O(n) performance. However, if you're sorting a flat array, it has to do a lot of data shuffling. This is one of the few times where a linked list actually has a performance advantage, as the buckets can be spliced together with only a single re-link operation.
 Question 92
Which of the following greedy strategies gives the optimal solution for the following 0-1 knapsack problem?
w=30, w1=10, p1=11, w2=15, p2=12, w3=20, p3=24, w4=25, p4=25
 A Largest profit per unit weight first B Largest profit first C Lightest item first D Heaviest item first
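This question carries no explanation in the source; as a quick check, the four greedy strategies can be simulated on the given instance (helper name illustrative):

```python
def greedy(items, cap, key):
    """Greedily take whole items (0-1 rule) in the order given by `key`."""
    total = 0
    for w, p in sorted(items, key=key):
        if w <= cap:            # item fits: take it whole
            cap -= w
            total += p
    return total

items = [(10, 11), (15, 12), (20, 24), (25, 25)]   # (weight, profit), W = 30
ratio  = greedy(items, 30, key=lambda i: -i[1] / i[0])  # largest profit/weight first
profit = greedy(items, 30, key=lambda i: -i[1])         # largest profit first
light  = greedy(items, 30, key=lambda i: i[0])          # lightest first
heavy  = greedy(items, 30, key=lambda i: -i[0])         # heaviest first
```

Largest profit per unit weight fills the knapsack exactly (weights 20 + 10) for a profit of 35, which none of the other three strategies matches.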
 Question 93
Which of the following is the time complexity to find the determinant of an upper triangular matrix of order n*n?
 A Θ(n2.5) B Θ(n) C Θ(n2) D Θ(n)
Question 93 Explanation:
The determinant of an upper triangular matrix is the product of its diagonal elements, so the time complexity for the determinant is Θ(n); for the inverse it is O(n²).
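A sketch of the Θ(n) computation:

```python
def det_upper_triangular(M):
    """Determinant of an upper (or lower) triangular matrix:
    the product of the diagonal entries, computed in a single Θ(n) pass."""
    det = 1
    for i in range(len(M)):
        det *= M[i][i]
    return det
```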
 Question 94
Consider an array with n elements. Which of the following options gives the maximum number of swap(a_j, a_(j+1)) operations needed to sort the array elements using the efficient bubble sort algorithm?
 A n²/2 B n(n−1)/2 C n² D n(n+1)/2
Question 94 Explanation:

Complexity Analysis of Bubble Sort

In bubble sort, n−1 comparisons are done in the 1st pass, n−2 in the 2nd pass, n−3 in the 3rd pass and so on. So the total number of comparisons (and, in the worst case, of swaps) is

(n−1) + (n−2) + (n−3) + ... + 3 + 2 + 1 = n(n−1)/2

i.e. O(n²).
Hence the time complexity of bubble sort is O(n²).

Also, the best case time complexity will be O(n), it is when the list is already sorted.

Following are the Time and Space complexity for the Bubble Sort algorithm.
• Worst Case Time Complexity [Big-O]: O(n²)
• Best Case Time Complexity [Big-Omega]: O(n)
• Average Time Complexity [Big-Theta]: O(n²)
• Space Complexity: O(1)
 Question 95
The running time of an algorithm T(n),where 'n' is the input size, is given by
T(n)=8T(n/2)+qn, if n>1
=p,                if n=1
Where p,q are constants. The order of this algorithm is
 A n² B nⁿ C n³ D n
Question 95 Explanation:
Master's theorem:
T(n) = a·T(n/b) + Θ(n^k · log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.

Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))

Case 2: if a = b^k,
a) if p > −1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = −1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < −1, then T(n) = Θ(n^(log_b a))

Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k · log^p n)
b) if p < 0, then T(n) = Θ(n^k)

Given,
T(n) = 8T(n/2) + qn.
Using the Master theorem form T(n) = a·T(n/b) + Θ(n^k · log^p n),
we have a = 8, b = 2, k = 1, p = 0, and a > b^k.
Hence
T(n) = Θ(n^(log_b a)) = Θ(n^(log₂ 8)) = Θ(n³)
 Question 96
The question is based on the following program fragment.
```c
f(int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do
    {
        k = (i + j) / 2;
        if (Y[k] < x)
            i = k;
        else
            j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
```
On which of the following contents of 'Y' and 'x' does the program fail ?
 A Y is [1 2 3 4 5 6 7 8 9 10] and x<10 B Y is [1 3 5 7 9 11 13 15 17 19] and x<1 C Y is [ 2 2 2 2 2 2 2 2 2 2] and x>2 D Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even
Question 96 Explanation:
For option(A) with x=9, k will take the following values:
• 4
• 6
• 7
• 8−y=9, x found
For option(D), with x=10, k will take the following values:
• 4, y=10, x found
In option(C) the control will continue to iterate as i=8 and j=9; again and again "i" will be assigned k which itself equals 8 as [ (8+9)/2 ] being stored in an integer type variable, will evaluate to 8.

option(D) printing mistake : Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even
 Question 97
An algorithm is made up of two modules M1 and M2. If order of M1 is f(n) and M2 is g(n) then the order of algorithm is
 A Max(f(n),g(n)) B Min(f(n),g(n)) C F(n)+g(n) D F(n) * g(n)
Question 97 Explanation:
By the definition of order, there exist constants c1, c2 and thresholds n1, n2 such that
T1(n) ≤ c1 × f(n), for all n ≥ n1
T2(n) ≤ c2 × g(n), for all n ≥ n2
where T1 and T2 are the running times of M1 and M2. Let N = max(n1, n2) and c = max(c1, c2). Then for all n ≥ N,
T(n) = T1(n) + T2(n) ≤ c × (f(n) + g(n))
Without loss of generality let max(f(n), g(n)) = f(n). Then
T(n) ≤ c × (f(n) + f(n)) = 2c × f(n)
So the order is f(n), which is max(f(n), g(n)) by assumption.

Explanation 2:
Three cases are possible here.
Case 1: f(n) < g(n)
g(n) is the higher-order term and f(n) the lower-order term, so we ignore f(n) and consider only g(n).
Case 2: f(n) > g(n)
f(n) is the higher-order term and g(n) the lower-order term, so we ignore g(n) and consider only f(n).
Case 3: f(n) == g(n)
We consider either O(g(n)) or O(f(n)), which are asymptotically equal.
Note:
O(f(n) + g(n)) = O(max(f(n), g(n))), but the class O(max(f(n), g(n))) is not the same thing as the function max(f(n), g(n)).
 Question 98
Consider a complete binary tree where the left and the right subtrees of the root are max heaps. The lower bound for the number of operations to convert the tree to a heap is:
 A Ω(logn) B Ω(nlogn) C Ω(n) D Ω(n2)
Algorithms Nielit Scientific IT 2017
Question 98 Explanation:
Since the left and right subtrees of the root are already max heaps, a single Heapify (sift-down) call on the root restores the heap property. In the worst case the root travels down the full height of the tree, so the lower bound is Ω(log n).
 Question 99
Which type of algorithm is used to solve the "8 Queens" problem?
 A Greedy B Dynamic C Divide and conquer D Backtracking
Algorithms Nielit Scientist-B DEC 2017
Question 99 Explanation:
Backtracking algorithm is used to solve the 8 Queens problem.
1. Start in the leftmost column
2.  If all queens are placed return true
3. Try all rows in the current column. Do following for every tried row.
(a) If the queen can be placed safely in this row then mark this [row, column] as part of the solution and recursively check if placing queen here leads to a solution.
(b) If placing the queen in [row, column] leads to a solution then return true.
(c) If placing the queen doesn't lead to a solution then unmark this [row, column] (backtrack) and go to step (a) to try other rows.
4.  If all rows have been tried and nothing worked, return false to trigger backtracking.
 Question 100
Merge sort uses:
 A Divide and conquer B Backtracking C Heuristic approach D Greedy approach
Algorithms     GATE CS 1995
Question 100 Explanation:
Merge sort is a divide-and-conquer algorithm based on the idea of breaking down a list into several sub-lists until each sublist consists of a single element and merging those sublists in a manner that results into a sorted list.

Idea:
• Divide the unsorted list into N sublists, each containing 1 element.
• Take adjacent pairs of singleton lists and merge them to form lists of 2 elements. The N lists now become N/2 lists of size 2.
• Repeat the process till a single sorted list is obtained.
While comparing two sublists for merging, the first element of both lists is taken into consideration. While sorting in ascending order, the element that is of a lesser value becomes a new element of the sorted list. This procedure is repeated until both the smaller sublists are empty and the new combined sublist comprises all the elements of both the sublists.

Reference : https://www.hackerearth.com/practice/algorithms/sorting/merge-sort/tutorial/#:~:text=Merge%20sort%20is%20a%20divide,into%20sublists%2C%20each%20containing%20element.
 Question 101
Given two sorted list of size 'm' and 'n' respectively. The number of comparisons needed in the worst case by the merge sort algorithm will be:
 A M*n B Minimum of m,n C Maximum of m,n D M+n-1
Algorithms      ISRO 2018
Question 101 Explanation:
The merge step of merge sort normally performs n + m − 1 comparisons, where one list is of size n and the other is of size m. This is the simplest way to combine two sorted lists.

Assuming m < n, at least m comparisons are needed and at most n + m − 1 (worst case). If all elements of the smaller list come first, the number of comparisons is min(n, m), the best case, which is Option (B). Option (D) is correct for the worst case.
 Question 102
Which of the following algorithms use recursion for sorting an array of integers?
 A Bubble sort and Insertion sort B Bubble sort and Quicksort C Bubble sort and merge sort D Quicksort and merge sort
Question 102 Explanation:
Recursive techniques can be utilized in sorting algorithms, allowing n elements to be sorted in O(n log n) time (compared with the O(n²) efficiency of bubble sort). Two such algorithms are Merge sort and Quicksort.
 Question 103
Consider the following C program segment:
```int IsPrime(int n)   /* requires <math.h> for sqrt */
{
    int i;
    for (i = 2; i <= sqrt(n); i++)
        if (n % i == 0)
        { printf("Not Prime\n"); return 0; }
    return 1;
}```
Let T(n) denotes the number of times the for loop is executed by the program on input n.
Which of the following is TRUE?
 A T(n) = O(​√n) and T(n) = Ω​ (√n) B T(n) = O(​√n) and T(n) = ​ Ω ​(1) C T(n) = O(n) and T(n) = ​ Ω ​ (√n) D None of the above
Question 103 Explanation:
The worst case cannot be more than O(√n) and the best case is Ω(1):
(1) the loop can terminate with the condition n % i == 0 on the very first iteration;
(2) the for loop in the question runs at most √n times and at least 1 time.
Therefore,
T(n) = O(√n) and T(n) = Ω(1)
 Question 104
Which of the following algorithm solve the all pair shortest path problem?
A) Dijkstra's algorithm
B) Floyd's algorithm
C) prim's algorithm
D) Warshall's algorithm
 A Both A and B B Both A and C C Both C and D D Both B and D
Question 104 Explanation:
Prim’s Algorithm : Prim’s Algorithm is used to find the Minimum Spanning Tree (MST) of a given graph. To apply Prim’s algorithm, the given graph must be weighted, connected and undirected.
Dijkstra’s Algorithm : Dijkstra’s Algorithm is used to find the shortest path from a source to all the other nodes in a weighted graph.
Bellman–Ford’s Algorithm : Bellman–Ford’s Algorithm is used to find the shortest distance from a source to all other nodes in a graph that can contain negative-weight edges.

Floyd–Warshall’s Algorithm : Floyd–Warshall’s Algorithm is used for solving the all-pairs shortest path problem, i.e. finding the shortest paths between all pairs of vertices in a graph.

It is a single dynamic-programming algorithm named after the two scientists Robert Floyd and Stephen Warshall.
Note: "Floyd's algorithm" and "Warshall's algorithm" are not two different algorithms, so they should not have been given as two separate options.

Reference : https://en.m.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
 Question 105
An algorithm is made up of two modules M1 and M2. If order of M1 is f(n) and M2 is g(n) then the order of algorithm is
 A Max(f(n),g(n)) B Min(f(n),g(n)) C F(n)+g(n) D F(n) * g(n)
Question 105 Explanation:
By the definition of order, there exist constants c1, c2 and thresholds n1, n2 such that
T(n) ≤ c1 × f(n), for all n ≥ n1
T(n) ≤ c2 × g(n), for all n ≥ n2
Let N = max(n1, n2) and c = max(c1, c2). Then
T(n) ≤ c × f(n) for all n ≥ N
T(n) ≤ c × g(n) for all n ≥ N
Adding the two inequalities gives T(n) ≤ (c/2) × (f(n) + g(n))
Without loss of generality let max(f(n), g(n)) = f(n)
So T(n) ≤ (c/2) × (f(n) + f(n)) = c × f(n)
So the order is f(n), which is max(f(n), g(n)) by assumption

Explanation 2 :
Three cases are possible here:
Case 1 : f(n) < g(n)
g(n) is the higher-order term and f(n) the lower-order term, so we ignore f(n) and consider only g(n)
Case 2 : f(n) > g(n)
g(n) is the lower-order term and f(n) the higher-order term, so we ignore g(n) and consider only f(n)
Case 3 : f(n) == g(n)
We consider either O(g(n)) or O(f(n)), which are asymptotically equal.
Note :
O( f(n) + g(n) ) = O( max( f(n), g(n) ) )
 Question 106
The running time of an algorithm T(n),where 'n' is the input size, is given by
T(n)=8T(n/2)+qn, if n>1
=p,                if n=1
Where p,q are constants. The order of this algorithm is
 A n² B n^n C n³ D n
Question 106 Explanation:
Master's theorem:
T(n) = a T(n/b) + Θ(n^k log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.

Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))

Case 2: if a = b^k,
a) if p > −1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = −1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < −1, then T(n) = Θ(n^(log_b a))

Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k log^p n)
b) if p < 0, then T(n) = Θ(n^k)

Given,
T(n) = 8T(n/2) + qn
matches the template T(n) = a T(n/b) + Θ(n^k log^p n)
with a = 8, b = 2, k = 1, p = 0, and a > b^k (8 > 2¹).
Hence, by Case 1,
T(n) = Θ(n^(log_b a)) = Θ(n^(log_2 8)) = Θ(n³)
 Question 107
The question is based on the following program fragment.
```f(int Y[], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do
    {
        k = (i + j) / 2;
        if (Y[k] < x)
            i = k;
        else
            j = k;
    }
    while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}```
On which of the following contents of 'Y' and 'x' does the program fail ?
 A Y is [1 2 3 4 5 6 7 8 9 10] and x<10 B Y is [1 3 5 7 9 11 13 15 17 19] and x<1 C Y is [ 2 2 2 2 2 2 2 2 2 2] and x>2 D Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even
Question 107 Explanation:
For option (A) with x=9, k will take the following values:
• 4
• 6
• 7
• 8, where Y[8]=9 → x found
For option (D) with x=10, k will take the following value:
• 4, where Y[4]=10 → x found
In option (C) the control will iterate forever: once i=8 and j=9, k = (8+9)/2, stored in an integer variable, evaluates to 8 again and again, so "i" keeps being assigned k = 8 and i, j, k never change.

Note: option (D) contains a printing mistake; it should read "Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even".
 Question 108
Which of the following search algorithm requires a sorted array?
 A Linear search B Hash search C Binary search D All of these
Question 108 Explanation:
- In binary search the array needs to be sorted. The sorted order lets us skip one half of the array at each iteration until we reach the desired result, so the complexity is O(log n). If the data is not sorted, the operation cannot be performed in O(log n) time.

- Binary search effectively assumes the middle element is the median of the array. If the array is not sorted, this assumption makes no sense: the median can be anywhere, and cutting the array in half could cut off the very number you were searching for.
 Question 109
Sorting is
 A A process of rearranging a given set of objects in a specific order B To facilitate the later search for members of the sorted set C Is a relevant and essential activity, particularly in data processing D All of these
Question 109 Explanation:
In computer science, arranging in an ordered sequence is called "sorting". Sorting is a common operation in many applications, and efficient algorithms to perform it have been developed.
The most common uses of sorted sequences are:
• making lookup or search efficient;
• making merging of sequences efficient.
• enable processing of data in a defined order.
Reference : https://en.wikipedia.org/wiki/Sorting
 Question 110
Which of the following sorting algorithms uses recursion?
 A Insertion sort B Heap sort C Merge sort D Bubble sort
Question 110 Explanation:
Recursive techniques can be utilized in sorting algorithms, allowing for the sorting of n elements in O(nlogn) time (compared with the O(n2) efficiency of bubble sort. Two such algorithms which will be examined here are Mergesort and Quicksort
 Question 111
The complexity of linear search algorithm is
 A O(nlogn) B O(n) C O(logn) D O(n*n)
Question 111 Explanation:
• Linear search or sequential search is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.
• A linear search runs in at worst linear time and makes at most n comparisons, where n is the length of the list. If each element is equally likely to be searched, linear search makes (n + 1)/2 comparisons on average.
• The worst-case complexity is O(n), which is why it is sometimes called an O(n) search.
 Question 112
The maximum number of comparisons for a particular record among 32 sorted records through the binary search method will be
 A 2 B 10 C 8 D 5
Question 112 Explanation:
For binary search over n elements, the maximum number of comparisons is log₂ n (for n a power of two).
Given n = 32:
log₂ 32 = 5
∴ maximum number of comparisons = 5
 Question 113
For the recurrence relation
T(n) = 2 + T(n - 1), where T(0)=2, T(n)=?
 A N2 B 2n+2 C Log(n) D 2n
Question 113 Explanation:
Unfolding the recurrence: T(n) = T(n−1) + 2 = T(n−2) + 2·2 = … = T(0) + 2n = 2n + 2.
So T(n) = 2n + 2.
 Question 114
Travelling salesperson problem belongs to which category of problems?
 A Satisfiable B Non Solvable C Decision D Optimization
Question 114 Explanation:
The travelling salesman problem (also called the travelling salesperson problem or TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.
Reference : https://en.wikipedia.org/wiki/Travelling_salesman_problem#:~:text=The%20travelling%20purchaser%20problem%20and,class%20of%20NP%2Dcomplete%20problems.
 Question 115
Given the array: 12,40,3,2 and intermediate states of the array while sorting
Stage (i) 12,3,40,2
Stage (ii) 12,3,2,40
Stage (iii) 3,12,2,40
Stage (iv) 3,2,12,40
Stage (v) 2,3,12,40
The array has been sorted using
 A Insertion sort B Selection sort C Quick sort D Bubble sort
Question 115 Explanation:
1. Starting from the first index, compare the first and the second elements.If the first element is greater than the second element, they are swapped. Now, compare the second and the third elements. Swap them if they are not in order. The above process goes on until the last element.
2. The same process goes on for the remaining iterations. After each iteration, the largest element among the unsorted elements is placed at the end. In each iteration, the comparison takes place up to the last unsorted element. The array is sorted when all the unsorted elements are placed at their correct positions.
bubbleSort(array)
  for i <- 1 to indexOfLastUnsortedElement-1
    if leftElement > rightElement
      swap leftElement and rightElement
end bubbleSort
 Question 116
A and B are two sorted lists of integers such that A[n]<B. An element is to be searched in both lists. Maximum number of comparisons using binary search is
 A 2*log2 n B 2*log2 n+1 C Log2 2n D 2* Log2 n+1
Question 116 Explanation:
Since at each step the array is cut into two parts, each of size (n−1)/2, each search takes a maximum of about log₂ n steps.
Searching both lists therefore takes a total of about 2·log₂ n comparisons, which asymptotically is still O(log n).
 Question 117
Consider an array representation of an n element binary heap where the elements are stored from index 1 to index n of the array. For the element stored at index i of the array (i<=n), the index of the parent is:
 A Floor ((i+1)/2) B Ceiling ((i+1)/2) C Floor (i/2) D Ceiling (i/2)
Question 117 Explanation:
Binary heap of n elements in which elements are stored in array from index 1 to n
For the binary heap element stored at index i of the array,
-Parent Node will be at index: floor(i/2)
-Left Child will be at index: 2i
-Right child will be at index: 2*i + 1
 Question 118
The following numbers are inserted into an empty binary search tree in the given order: 10, 1, 3, 5, 15, 12, 16 What is the height of the binary search tree ?
 A 3 B 4 C 5 D 6
Question 118 Explanation:
Inserting 10, 1, 3, 5, 15, 12, 16 in order: 10 becomes the root; 1 goes left of 10; 3 right of 1; 5 right of 3; 15 right of 10; 12 left of 15; 16 right of 15. The longest root-to-leaf path is 10 → 1 → 3 → 5, with 3 edges, so the height of the tree is 3.
 Question 119
Let G be an undirected connected graph with distinct edge weight. Let Emax be the edge with maximum weight and E​ min​ the edge with minimum weight. Which of the following statements is false?
 A Every minimum spanning tree of G must contain E​ min. B If E​ max​ is in minimum spanning tree, then its removal must disconnect G. C No minimum spanning tree contains E​ max. D G has a unique minimum spanning tree.
Question 119 Explanation:
Option(A) : True
Every minimum spanning tree of G must contain Emin always TRUE.
In Kruskal's algorithm always picks the edges in ascending order of their weights while constructing a minimum spanning tree of G.Option(A) is always True because first edge could never create a cycle.

Option(B) : True
If Emax is in a minimum spanning tree, then its removal must disconnect G.
Emax is included in an MST if and only if Emax is a bridge between two connected components, and removal of a bridge surely disconnects the graph.

Option(C) : False
"No minimum spanning tree contains Emax" should be written as "may or may not contain Emax".
Refer to the bridge case in option (B): there Emax must be in every MST.

Option(D) : True
G has a unique minimum spanning tree
G has unique edge weights , so MST will be unique . In case if edge weights were repeating , there could've been a possibility of non-unique MSTs.
 Question 120
A list of n strings, each of length n, is sorted into lexicographic order using merge – sort algorithm. The worst case running time of this computation is :
 A O(n log n) B O(n​ 2​ log n) C O(n​ 2​ + log n) D O(n​ 3​ )
Question 120 Explanation:
- The recurrence tree for merge sort has height log n, and O(n) comparisons are done at each level.
- A single comparison of two strings of length n takes O(n) time in the worst case.
- So the time complexity of this merge sort is O(n² log n).
 Question 121
Postorder traversal of a given binary search tree T produces following sequence of keys:3, 5, 7, 9, 4, 17, 16, 20, 18, 15, 14 Which one of the following sequences of keys can be the result of an in-order traversal of the tree T?
 A 3, 4, 5, 7, 9, 14, 20, 18, 17, 16, 15 B 20, 18, 17, 16, 15, 14, 3, 4, 5, 7, 9 C 20, 18, 17, 16, 15, 14, 9, 7, 5, 4, 3 D 3, 4, 5, 7, 9, 14, 15, 16, 17, 18, 20
Question 121 Explanation:
* A binary search tree can be constructed using only preorder or only postorder traversal result.
* This is because inorder traversal can be obtained by sorting the given result in increasing order.
* Inorder traversal of a BST always gives elements in increasing order. Among all four options(D) is the only increasing order sequence.
 Question 122
Let us assume that you construct ordered tree to represent the compound ​ proposition
(~ (p ∧ q)) ↔ (~ p ∨ ~ q)
Then, the prefix expression and post-fix​ expression determined using this ordered tree are given as ____ and _____ respectively.
 A ↔ ~ ∧pq∨ ~ ~ pq, pq∧~p~q~∨↔ B ↔ ~ ∧pq∨ ~ p ~ q, pq∧~p~q~∨↔ C ↔ ~ ∧pq∨ ~ ~ pq, pq∧~p~~q∨↔ D ↔ ~ ∧pq∨ ~ p~ q, pq∧~p~ ~q∨↔
Question 122 Explanation:
Given (~(p ∧ q)) ↔ (~ p ∨ ~ q)
root = ↔
Left subtree = (~(p ∧ q))
Right subtree = (~ p ∨ ~ q)
Prefix exp: ↔∼∧pq∨∼p∼q
Postfix exp: pq∧∼p∼q∼∨↔
 Question 123
Given i= 0, j = 1, k = –1, x = 0.5, y = 0.0
What is the output of given ‘C’ expression ? x * 3 && 3 || j | k
 A -1 B 0 C 1 D 2
Question 123 Explanation:
According to the C precedence table, bitwise | binds tighter than && and ||, so the expression parses as:
((x * 3) && 3) || (j | k)
x * 3 = 1.5, which is nonzero, so (x * 3) && 3 evaluates to 1.
1 || (j | k) then short-circuits to 1 without evaluating j | k.
(Even if the left operand were 0, j | k = 1 | −1 is nonzero, so the result would still be 1.)
Result: 1
 Question 124
The runtime for traversing all the nodes of a binary search tree with n nodes and printing them in order is:
 A O(lg n) B O(n lg n) C O(n) D O(n​ 2​ )
Question 124 Explanation:
* Inorder traversal of BST gives the sorted order
* Inorder traversal takes O(n) time complexity in worst case
 Question 125
In general, in a recursive and non-recursive implementation of a problem (program) :
 A Both time and space complexities are better in recursive than in non-recursive program. B Both time and space complexities are better in non-recursive than in recursive program C Time complexity is better in recursive version but space complexity is better in non-recursive version of the program. D Space complexity is better in recursive version but time complexity is better in non-recursive version of the program.
Question 125 Explanation:
In general, between a recursive and a non-recursive implementation of a program, both time and space complexities are better in the non-recursive version.
Recursion does not save space or time, since the state of each pending call (its subroutine frame) must be stored on the stack.
Recursion may not offer better space or time complexity; it only offers code that is more compact and simpler to write and understand than a non-recursive implementation.
 Question 126
In the following graph, discovery time stamps and finishing time stamps of Depth First Search (DFS) are shown as x/y, where x is discovery time stamp and y is finishing time stamp. It shows which of the following depth first forest?
 A {a, b, e} {c, d, f, g, h} B {a, b, e} {c, d, h} {f, g} C {a, b, e} {f, g} {c, d} {h} D {a, b, c, d} {e, f, g} {h}
Question 126 Explanation:
Check for the cycles in the graph: nodes a, b, e form one cycle; nodes f, g form another cycle; nodes c, d form another; and h has a self loop. The depth-first forest is therefore:
{a,b,e} {f,g} {c,d} {h}
 Question 127
An ideal sort is an in-place-sort whose additional space requirement is __________.
 A O (log​ 2​ n) B O (n log​ 2​ n) C O (1) D O (n)
Question 127 Explanation:
In particular, some sorting algorithms are "in-place". Strictly, an in-place sort needs only O(1) memory beyond the items being sorted;
 Question 128
The average case occurs in the linear search algorithm when:
 A The item to be searched is in somewhere middle of the array B The item to be searched is not in the array C The item to be searched is in the last of the array D The item to be searched is either in the last or not in the array
Question 128 Explanation:
- The average case occurs in the linear search algorithm when the item to be searched is somewhere in the middle of the array.
- The best case occurs when the item to be searched is at the start of the array.
- The worst case occurs when the item to be searched is at the end of the array (or not present at all).
 Question 129
To determine the efficiency of an algorithm the time factor is measured by:
 A Counting microseconds B Counting number of key operations C Counting number of statements D Counting kilobytes of algorithm
Question 129 Explanation:
Suppose X is an algorithm and n is the size of input data, the time and space used by the algorithm X are the two main factors, which decide the efficiency of X.
• Time Factor − Time is measured by counting the number of key operations such as comparisons in the sorting algorithm.
• Space Factor − Space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by the algorithm in terms of n as the size of input data.
 Question 130
Which of the following algorithms sort n integers, having the range 0 to (n​ 2​ - 1), in ascending order in O(n) time ?
 A Selection sort B Bubble sort C Radix sort D Insertion sort
Question 130 Explanation:
• Selection sort takes O(n2) time.
• Bubble sort takes O(n2) time.
• Radix sort takes O(n) time.
• Insertion sort takes O(n2) time
 Question 131
You have to sort a list L, consisting of a sorted list followed by a few ‘random’ elements. Which of the following sorting method would be most suitable for such a task ?
 A Bubble sort B Selection sort C Quick sort D Insertion sort
Question 131 Explanation:
Bubble sort: compares and swaps adjacent pairs regardless of the initial order, so it still takes O(n²) time.
Selection sort: always scans the unsorted part for the minimum, so a nearly sorted input still takes O(n²) time.
Quick sort: a sorted array as input is the worst case (with the first/last element as pivot), so it takes O(n²) time.
Insertion sort: a (nearly) sorted array as input is the best case for this sort; best-case complexity is O(n).
 Question 132
Big-O estimates for the factorial function and the logarithm of the factorial function i.e. n! and log n! is given by
 A O(n!) and O(n log n) B O(nn) and O(n log n) C O(n!) and O(log n!) D O(nn) and O(log n!)
Question 132 Explanation:
Note: n! can be bounded by n^n:
n! = 1·2·3·4·…·(n−1)·n ≤ n·n·n·…·n = n^n, so n! = O(n^n)
Taking logarithms, log(n!) ≤ log(n^n) = n log n
= O(n log n)
 Question 133

The solution of the recurrence relation T(m) = T(3m/4) + 1 is :

 A Θ(lg m) B Θ(m) C Θ(mlg m) D Θ(lglg m)
Question 133 Explanation:
Master's theorem:
T(n) = a T(n/b) + Θ(n^k log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.

Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))

Case 2: if a = b^k,
a) if p > −1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = −1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < −1, then T(n) = Θ(n^(log_b a))

Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k log^p n)
b) if p < 0, then T(n) = Θ(n^k)

Given
T(m) = T(3m/4) + 1
T(m) = T(m/(4/3)) + 1

this matches the template T(m) = a T(m/b) + Θ(m^k log^p m)
with a = 1, b = 4/3, k = 0 and p = 0.

Since a = 1 = b^0 = b^k, Case 2 applies, and p = 0 > −1 means Case 2(a):
T(m) = Θ(m^(log_b a) · log^(p+1) m)
Since log_{4/3} 1 = 0,
T(m) = Θ(m^0 · log¹ m) = Θ(log m)
 Question 134
Which of the following algorithms solves the single-source shortest paths ?
 A Prim’s algorithm B Floyd - Warshall algorithm C Johnson’s algorithm D Dijkstra’s algorithm
Question 134 Explanation:
Prim’s Algorithm : Prim’s Algorithm is used to find the Minimum Spanning Tree (MST) of a given graph. To apply Prim’s algorithm, the given graph must be weighted, connected and undirected.

Floyd–Warshall’s Algorithm : Floyd–Warshall’s Algorithm is used for solving all-pairs shortest path problems, i.e. finding the shortest paths between all pairs of vertices in a graph.

Johnson’s algorithm : Johnson's algorithm is a way to find the shortest paths between all pairs of vertices in an edge-weighted, directed graph.
It allows some of the edge weights to be negative numbers, but no negative-weight cycles may exist.
It works by using the Bellman–Ford algorithm to compute a transformation of the input graph that removes all negative weights, allowing Dijkstra's algorithm to be used on the transformed graph.
The time complexity of the Floyd–Warshall algorithm is Θ(V³). Using Johnson’s algorithm, we can find all-pairs shortest paths in O(V² log V + VE) time.
Johnson’s algorithm uses both Dijkstra and Bellman–Ford as subroutines.

Dijkstra’s Algorithm : Dijkstra’s Algorithm is used to find the shortest path from a source to all the other nodes in a weighted graph, which is why it is called a single-source shortest path algorithm.
 Question 135
A text is made up of the characters A, B, C, D, E each occurring with the probability
0.08, 0.40, 0.25, 0.15 and 0.12
respectively.
The optimal coding technique will have the average length of :
 A 2.4 B 1.87 C 3 D 2.15
Question 135 Explanation:
Given Probability  :
A → 0.08
B → 0.40
C → 0.25
D → 0.15
E → 0.12
Now draw a Huffman tree. The length contribution of each character = number of code bits × probability of occurrence:
Code of A = 1110 → 4 × 0.08 = 0.32
Code of B = 0 → 1 × 0.40 = 0.40
Code of C = 10 → 2 × 0.25 = 0.50
Code of D = 110 → 3 × 0.15 = 0.45
Code of E = 1111 → 4 × 0.12 = 0.48

Now add these contributions to find the average length:
Average length = 0.32 + 0.40 + 0.50 + 0.45 + 0.48 = 2.15
 Question 136
 A (a)-(iv),(b)-(i), (c)-(iii), (d)-(ii) B (a)-(iv),(b)-(iii), (c)-(i), (d)-(ii) C (a)-(iii),(b)-(iv), (c)-(ii), (d)-(i) D (a)-(iv),(b)-(iii), (c)-(ii), (d)-(i)
Question 136 Explanation:
1. 8-Queens‏‏ Problem is ‎Backtracking.
2. Single-Source shortest‏‏‎ Paths‏‏‎ is Greedy‏‏‎ approach.‏‏‎
3. STRASSEN’s Matrix‏‏‎ Multiplication‏‏‎ is ‏‏‎ Divide‏‏‎ and‏‏‎ conquer‏‏‎ technique.‏‏‎
4. Optimal Binary‏‏‎ Search‏‏‎ trees‏‏‎ is Dynamic‏‏‎ programming.
 Question 137
The maximum number of comparisons needed to sort 9 items using radix sort is (assume each item is 5 digit octal number) :
 A 45 B 72 C 360 D 450
Question 137 Explanation:
• An octal number uses base 8, so each digit takes one of 8 possible values (0 to 7).
• The question says each item is a 5-digit octal number, so the number of digits is 5.
Maximum comparisons = (digits) × (base of the number) × (number of values)
where
digits = 5,
base of the number = 8,
number of values = 9

Therefore the maximum number of comparisons = 5 × 8 × 9 = 360
 Question 138
 A Bubble B Shake C Tree D Insertion
Question 138 Explanation:
 Question 139
What is the time complexity for the following C module ? Assume that n>0 .
```int module(int n)
{
if (n == 1)
return 1;
else
return (n + module(n-1));
}```
 A O(n) B O(log n) C O(n2) D O(n!)
Question 139 Explanation:
We need to consider
`n + module(n-1)`
where
module(n-1) is the recursive call and
adding n to it is a constant-time operation.

Recurrence relation: T(n) = T(n−1) + c
= T(n−2) + c + c = T(n−2) + 2c
= …
= T(n−k) + k·c
∴ n − k = 1 ⇒ k = n − 1
= T(1) + (n−1)·c
= O(n)
 Question 140
Consider the following terminology and match List 1 and List 2 and choose the correct answer from the code given below
b= branch factor
d= depth of shallowest solution
M= Maximum depth of the search tree
I= depth limit Code:
 A (a)-(iii), (b)-(ii), (c)-(iv), (d)-(i) B (a)-(ii), (b)-(iii), (c)-(iv), (d)-(i) C (a)-(i), (b)-(ii), (c)-(iv), (d)-(iii) D (a)-(i), (b)-(iii), (c)-(iv), (d)-(ii)
Question 140 Explanation:
Breadth-first search goes through the tree level by level, visiting all of the nodes on the top level first, then all the nodes on the second level, and so on. This strategy has the benefit of being complete (if there's a solution, it will be found), and optimal as long as the shallowest solution is the best solution. However, the way breadth-first search achieves this is by keeping all of the leaf nodes in memory, which requires a prohibitive amount of memory when searching anything more than a very small tree.
Time complexity : The time complexity of breadth-first search is O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
Space Complexity : The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).

Depth-First Search
Depth-first search goes through the tree branch by branch, going all the way down to the leaf nodes at the bottom of the tree before trying the next branch over. This strategy requires much less memory than breadth-first search, since it only needs to store a single path from the root of the tree down to the leaf node. However, it is potentially incomplete, since it will keep going on down one branch until it finds a dead-end, and it is nonoptimal -- if there's a solution at the fourth level in the first branch tried, and a solution at the second level in the next one over, the solution at the fourth level will be returned.
Time complexity : The time complexity of depth-first search is O(b^m), where b is the branching factor and m is the maximum depth of the tree.
Space complexity : The space complexity is O(b·m).

Depth-Limited Search
Depth-limited search essentially does a depth-first search with a cutoff at a specified depth limit. When the search hits a node at that depth, it stops going down that branch and moves over to the next one. This avoids the potential problem with depth-first search of going down one branch indefinitely. However, depth-limited search is incomplete → if there is a solution, but only at a level deeper than the limit, it will not be found. It is also nonoptimal in the same way as depth-first search is.
Time complexity : The time complexity of depth-limited search is O(b^l), where b is the branching factor and l is the depth limit.
Space complexity : The space complexity is O(b·l).

Iterative Deepening Search
Iterative deepening does repeated depth-limited searches, starting with a limit of zero and incrementing once each time. As a result, it has the space-saving benefits of depth-first search, but is also complete and optimal, since it will visit all the nodes on the same level first before continuing on to the next level in the next round when the depth is incremented.
Time complexity : The time complexity of iterative deepening search is O(b^d), where b is the branching factor and d is the depth of the solution.
Space complexity : The space complexity is O(b·d).
 Question 141
The time required to search an element in a linked list of length n is
 A O(log n) B O(n) C O(1) D (n2)
Question 141 Explanation:
→In the worst case, the element to be searched has to be compared with all elements of linked list.
→The time required to search an element in a linked list of length n is O(n).
→Linked lists hold two main pieces of information (the value and pointer) per node. This means that the amount of data stored increases linearly with the number of nodes in the list.
→ Therefore, the space complexity of the linked list is linear, i.e. O(n).
 Question 142
Which of the following methods is used for repetitive processes in which each action is stated in terms of a previous result:
 A Recursion B Iteration C Looping D Structures
Question 142 Explanation:
Recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally also be solved by iteration, but this requires identifying and indexing the smaller instances at programming time. Recursion solves such problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
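As a minimal illustration (factorial is my own example, not from the question), each recursive step is stated in terms of the previous result, which is exactly the pattern the answer describes:

```python
def factorial(n):
    if n == 0:                       # base case stops the recursion
        return 1
    return n * factorial(n - 1)      # current result in terms of previous

# The same problem solved by iteration, for contrast:
def factorial_iter(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial(5), factorial_iter(5))  # 120 120
```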
 Question 143
 A Knight's tour B Eight Queens problem C Kings problem D Rooks problem
Question 143 Explanation:
• The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal.
• The eight queens puzzle is an example of the more general n queens problem of placing n non-attacking queens on an n×n chessboard, for which solutions exist for all natural numbers n with the exception of n = 2 and n = 3
Refer : https://en.wikipedia.org/wiki/Eight_queens_puzzle#:~:text=The%20eight%20queens%20puzzle%20is,row%2C%20column%2C%20or%20diagonal.
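The puzzle is classically solved by backtracking; a minimal sketch (my own implementation, placing one queen per row and undoing a placement whenever a column or diagonal clashes):

```python
def n_queens(n):
    """Return all solutions to the n-queens problem via backtracking."""
    solutions = []

    def place(row, cols):
        # cols[i] = column of the queen already placed in row i
        if row == n:
            solutions.append(list(cols))
            return
        for col in range(n):
            # safe if no earlier queen shares this column or a diagonal
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(row + 1, cols)
                cols.pop()          # backtrack

    place(0, [])
    return solutions

print(len(n_queens(8)))                    # 92 solutions on the 8x8 board
print(len(n_queens(2)), len(n_queens(3)))  # 0 0: no solutions for n = 2, 3
```

The empty result sets for n = 2 and n = 3 match the exception noted in the explanation above.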
 Question 144
 A Knight's tour B Eight Queens problem C Kings problem D Rooks problem
Question 144 Explanation:
• The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal.
• The eight queens puzzle is an example of the more general n queens problem of placing n non-attacking queens on an n×n chessboard, for which solutions exist for all natural numbers n with the exception of n = 2 and n = 3
Refer : https://en.wikipedia.org/wiki/Eight_queens_puzzle#:~:text=The%20eight%20queens%20puzzle%20is,row%2C%20column%2C%20or%20diagonal.
 Question 145
Consider a Boolean function of ‘n’ variables. The order of an algorithm that determines whether the Boolean function produces an output 1 is :
 A Logarithmic B Linear C Quadratic D Exponential
Question 145 Explanation:
• For "n" boolean variables  there are 2n rows in truth table.
• For determines whether the boolean function produces a output 1 for the given function, in worst case it needs to check every possible row → O(2n) which is Exponential
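A minimal sketch of that exhaustive check (the example functions are hypothetical):

```python
from itertools import product

def is_ever_one(f, n):
    """Evaluate f on all 2^n rows of its truth table; O(2^n) worst case."""
    return any(f(*bits) for bits in product([False, True], repeat=n))

# Hypothetical example: true only when all three inputs agree.
f = lambda a, b, c: (a and b and c) or not (a or b or c)
print(is_ever_one(f, 3))                         # True
print(is_ever_one(lambda a, b: a and not a, 2))  # False: a contradiction
```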
 Question 146
Which of the following connected simple graph has exactly one spanning tree ?
 A Complete graph B Hamiltonian graph C Euler graph D None of the above
Question 146 Explanation:
A connected simple graph has exactly one spanning tree if and only if it is itself a tree, i.e., acyclic. All the graphs in the options contain cycles (for more than two vertices), and a graph with a cycle always has more than one spanning tree, since removing different edges of the cycle yields different spanning trees. Hence none of the options is correct.
 Question 147
How much extra space is used by heapsort ?
 A O(1) B O(log n) C O(n) D O(n^2)
Question 147 Explanation:
Heapsort is built around the MAX-HEAPIFY procedure, which is naturally recursive, but it can be implemented with a simple while loop, making it iterative and eliminating the recursion stack. Since the sort itself rearranges the array in place, the extra space used by heapsort can be reduced to O(1).
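A minimal in-place sketch with an iterative sift-down (my own implementation of the standard algorithm), so the only extra space is a handful of index variables:

```python
def heapsort(a):
    """Sort list a in place using O(1) extra space; returns a."""
    n = len(a)

    def sift_down(start, end):
        # Iterative MAX-HEAPIFY: push a[start] down to its proper place.
        root = start
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1                       # pick the larger child
            if a[root] >= a[child]:
                return                           # heap property restored
            a[root], a[child] = a[child], a[root]
            root = child

    for start in range(n // 2 - 1, -1, -1):      # build the max-heap
        sift_down(start, n)
    for end in range(n - 1, 0, -1):              # repeatedly extract max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heapsort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```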
 Question 148
Suppose that the splits at every level of Quicksort are in proportion 1-β to β, where 0 < β ≤ 0.5 is a constant. The number of elements in an array is n. The maximum depth is approximately A A B B C C D D
Question 148 Explanation:
The deepest recursion path repeatedly follows the larger (1−β) fraction, so the maximum depth d satisfies (1−β)^d · n ≈ 1, i.e., d ≈ log n / log(1/(1−β)) = −log n / log(1−β).
 Question 149
Match the following with respect to algorithm paradigms :
```a. Merge sort                      i. Dynamic programming
b. Huffman coding                 ii. Greedy approach
c. Optimal polygon triangulation iii. Divide and conquer
d. Subset sum problem             iv. Back tracking
```
Code :
 A a-iii, b-i, c-ii, d-iv B a-ii, b-i, c-iv, d-iii C a-ii, b-i, c-iii, d-iv D a-iii, b-ii, c-i, d-iv
Question 149 Explanation:
• Merge sort is a Divide and conquer algorithm.
• Huffman coding is a Greedy approach.
• Optimal polygon triangulation is Dynamic programming algorithm.
• Subset sum problem is a Back tracking algorithm.
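A minimal backtracking sketch for the subset sum pairing above (my own implementation; at each element we try including it, then backtrack and try excluding it):

```python
def subset_sum(nums, target):
    """Return True if some subset of nums sums to target (backtracking)."""
    def solve(i, remaining):
        if remaining == 0:
            return True                      # found a subset
        if i == len(nums) or remaining < 0:
            return False                     # dead end: backtrack
        # try including nums[i]; on failure, backtrack and exclude it
        return solve(i + 1, remaining - nums[i]) or solve(i + 1, remaining)

    return solve(0, target)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

The two calls reproduce the example inputs from Question 1's explanation at the top of this page.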
 Question 150
The total number of comparisons in a bubble sort is
 A O(log n) B O(n log n) C O(n) D None of the above
Question 150 Explanation:
Bubble sort compares adjacent elements on each pass, making (n−1) + (n−2) + … + 1 = n(n−1)/2 comparisons in total, i.e., O(n^2). Since O(n^2) is not among the first three options, the answer is None of the above.
 Question 151
Which of the following is true for a sorted list with ‘n’ elements ?
 A Insertion in a sorted array takes constant time. B Insertion in a sorted linear linked list takes constant time. C Searching for a key in a sorted array can be done in O(log n) time. D Searching for a key in a sorted linear linked list can be done in O(log n) time.
Question 151 Explanation:
Option(A) : False
• In an unsorted array the insert operation is fast because the element can be placed anywhere; in a sorted array the correct position must be found and maintained.
• Insertion into a sorted array therefore takes O(n), not constant time: in the worst case all existing elements may have to be moved.
Option(B) : False
• Each insertion into a sorted singly linked list must first locate the correct position, which takes θ(n) in the worst case, so insertion is not constant time.
• Inserting n elements one by one into an initially empty list, while maintaining sorted order at every step, therefore costs θ(n^2) in total.
Option(C) : True
• Searching for a key in a sorted array can be done in O(log n) time via binary search algorithm.
Option(D) : False
• Binary search can be simulated on a linked list, but a linked list has no random access: reaching the middle node takes linear time, so the search still takes O(n) overall.
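The O(log n) search from option C can be sketched as follows (a standard binary search, my own implementation):

```python
def binary_search(a, key):
    """Search sorted array a for key; O(log n) via halving the range."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid           # found: return its index
        if a[mid] < key:
            lo = mid + 1         # key must lie in the upper half
        else:
            hi = mid - 1         # key must lie in the lower half
    return -1                    # not found

a = [2, 3, 5, 8, 13, 21]
print(binary_search(a, 13))  # 4
print(binary_search(a, 7))   # -1
```

Each iteration halves the remaining range, which is only possible because an array offers O(1) random access; a linked list does not, which is why option D fails.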
 Question 152
You are given a sequence of n elements to sort. The input sequence consists of n/k subsequences, each containing k elements. The elements in a given subsequence are all smaller than the elements in the succeeding subsequence and larger than the elements in the preceding subsequence. Thus, all that is needed to sort the whole sequence of length n is to sort the k elements in each of the n/k subsequences. The lower bound on the number of comparisons needed to solve this variant of the sorting problem is
 A Ω(n) B Ω(n/k) C Ω(nlogk ) D Ω(n/klogn/k)
Question 152 Explanation:
Sorting each subsequence of k elements needs Ω(k log k) comparisons, and there are n/k independent subsequences, so the total is Ω((n/k) · k log k) = Ω(n log k).
 Question 153
Any decision tree that sorts n elements has height ________.
 A Ω(lg n) B Ω(n) C Ω(n lg n) D Ω(n2)
Question 153 Explanation:
A decision tree that sorts n elements must have at least n! reachable leaves (one for each permutation of the input), so its height is at least lg(n!) = Ω(n lg n).
There are 153 questions to complete.
