Question 1 |
Divide and Conquer
| |
Dynamic Programming
| |
Greedy Algorithm
| |
Branch and Bound
|
Subset Sum Problem - Dynamic Programming
Given a set of positive integers and a value sum, determine whether there is a subset of the given set whose elements add up to the given sum.
For Example:
Input: set[] = {3, 34, 4, 12, 5, 2}, sum = 9
Output: True
There is a subset (4, 5) with sum 9.
Input: set[] = {3, 34, 4, 12, 5, 2}, sum = 30
Output: False
There is no subset that adds up to 30.
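The bottom-up DP can be sketched in Python (`is_subset_sum` is an illustrative name; the inputs are the examples above):

```python
def is_subset_sum(nums, target):
    # dp[s] is True if some subset of the numbers seen so far sums to s
    dp = [False] * (target + 1)
    dp[0] = True  # the empty subset sums to 0
    for num in nums:
        # iterate downwards so each number is used at most once
        for s in range(target, num - 1, -1):
            if dp[s - num]:
                dp[s] = True
    return dp[target]

print(is_subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(is_subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

This runs in O(n·sum) time and O(sum) space.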
Question 2 |
m*n
| |
Maximum of m and n
| |
Minimum of m and n
| |
m+n-1
|
Best case: min(m, n) comparisons, when one array has m elements and the other has n.
Worst case: m+n-1 comparisons, which happens when elements from the two arrays alternate into the third (merged and sorted) array.
The number of elements moved into the merged array is the same in all cases: m+n.
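Both bounds can be seen by counting comparisons while merging (a sketch; `merge_count` is a hypothetical helper):

```python
def merge_count(a, b):
    """Merge two sorted lists, returning (merged, number of comparisons)."""
    merged, comparisons = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged.extend(a[i:])  # one list is exhausted; copy the rest for free
    merged.extend(b[j:])
    return merged, comparisons

# Best case: one list exhausts immediately -> min(m, n) comparisons
print(merge_count([1, 2, 3], [10, 20, 30])[1])  # 3
# Worst case: elements alternate -> m + n - 1 comparisons
print(merge_count([1, 3, 5], [2, 4, 6])[1])     # 5
```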
Question 3 |
Merge Sort
| |
Insertion Sort
| |
Selection Sort
| |
Quick Sort
|
Insertion sort - If the array is already sorted, insertion sort takes O(n); if it is reverse sorted, it takes O(n²) to sort the array. So its running time depends on the ordering of the input.
Selection sort - The best-case and worst-case performance of selection sort are both O(n²); it does not depend on the ordering of the input.
Merge sort - Merge sort's best-case and worst-case time complexity are both O(n log n); it does not depend on the ordering of the input, i.e., whether the array is sorted or not.
Best[merge] = Worst[merge] and Best[selection] = Worst[selection]. Why aren't they equally good answers?
The best-case and worst-case performance of selection sort are both O(n²), but if the array is already sorted, fewer swaps take place. In merge sort, the time complexity is O(n log n) in all cases, and performance is affected least by the order of the input sequence.
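The input-order sensitivity of insertion sort can be checked with a comparison counter (an illustrative sketch that counts key comparisons only):

```python
def insertion_sort_comparisons(arr):
    """Sort a copy of arr with insertion sort; return the number of key comparisons."""
    a, comparisons = list(arr), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]   # shift larger element right
                j -= 1
            else:
                break             # sorted prefix: one comparison and stop
        a[j + 1] = key
    return comparisons

print(insertion_sort_comparisons([1, 2, 3, 4, 5]))  # 4  -> O(n) on sorted input
print(insertion_sort_comparisons([5, 4, 3, 2, 1]))  # 10 -> O(n^2) on reverse-sorted input
```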
Question 4 |
T(n)=T(n−1)+T(n−2)−T(n−3), if n>3
= n, otherwise
Then what should be the relation between T(1), T(2) and T(3) , so that the order of the algorithm is constant?
T(1) = T(2) = T(3)
| |
T(1) + T(3) = 2 * T(2)
| |
T(1) – T(3) = T(2)
| |
T(1) + T(2) = T(3)
|
T(n) = T(n−1) + T(n−2) − T(n−3) (assume a sufficiently large n)
For the order of the algorithm to be constant, we need T(n) = T(n−1).
Substituting T(n−1) = T(n) into the recurrence:
T(n) = T(n) + T(n−2) − T(n−3)
⇒ T(n−2) = T(n−3)
Repeating this argument down to the base cases gives
T(1) = T(2) = T(3)
Question 5 |
O(n)
| |
O(n log n)
| |
O(n^(3/2))
| |
O(n³)
|
Warshall's Algorithm
Warshall's algorithm uses the adjacency matrix to find the transitive closure of a directed graph.
Transitive closure
The transitive closure of a directed graph with n vertices can be defined as the n-by-n boolean matrix T, in which the element in the ith row and jth column is 1 if there exists a directed path from the ith vertex to the jth vertex, and 0 otherwise.
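A minimal sketch of Warshall's O(n³) algorithm on a 0/1 adjacency matrix (the 3-vertex graph is a made-up example):

```python
def transitive_closure(adj):
    """Warshall's algorithm: O(n^3) on an n x n boolean adjacency matrix."""
    n = len(adj)
    t = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # i reaches j directly, or via intermediate vertices 0..k
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t

# Hypothetical 3-vertex digraph: 0 -> 1 -> 2
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(transitive_closure(adj))  # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```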
Question 6 |
Detailed
| |
Aggregate
| |
Qualitative
| |
None of the above
|
For strategic planning, aggregate information is used.
Question 7 |
Greedy method
| |
Divide-and-conquer
| |
Dynamic Programming
| |
Backtracking
|
The algorithm maintains two subarrays in a given array.
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray. Clearly, it is a greedy approach to sort the array.
Example
arr[] = 64 25 12 22 11
# Placing the minimum element in arr[0...4] in the beginning
11 25 12 22 64
# Placing the minimum element in arr[1...4] in the beginning
11 12 25 22 64
# Placing the minimum element in arr[2...4] in the beginning
11 12 22 25 64
# Placing the minimum element in arr[3...4] in the beginning
11 12 22 25 64
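The trace above can be sketched as a greedy loop (picking the minimum of the unsorted suffix each pass):

```python
def selection_sort(arr):
    a = list(arr)
    for i in range(len(a) - 1):
        # greedily pick the minimum of the unsorted suffix a[i:]
        min_idx = i
        for j in range(i + 1, len(a)):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # move it into the sorted prefix
    return a

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```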
Question 8 |
O(n²), O(n²)
| |
O(n²), O(n log₂n)
| |
O(n log₂n), O(n²)
| |
O(n log₂n), O(n log₂n)
|
Question 9 |
O(log₂n)
| |
O(n²)
| |
O(n)
| |
O(n log₂n)
|
Worst-case performance = O(log n)
Best-case performance = O(1)
Average performance = O(log n)
Worst-case space complexity = O(1)
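An iterative sketch of the O(log n) search (the array and targets are illustrative):

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list; returns index or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the search range each step
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # 5
print(binary_search([2, 5, 8, 12, 16, 23, 38], 7))   # -1
```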
Question 10 |
1, 3, 5, 7, 9, 11, 13
| |
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
| |
0, 1, 3, 4, 7, 11, 18, 29, 47
| |
0, 1, 3, 7, 15
|
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
The next number is found by adding up the two numbers before it:
- the 2 is found by adding the two numbers before it (1+1),
- the 3 is found by adding the two numbers before it (1+2),
- the 5 is (2+3),
- and so on!
Example: the next number in the sequence above is 21+34 = 55
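The rule can be sketched in a few lines (`fibonacci` is an illustrative name):

```python
def fibonacci(count):
    """Return the first `count` Fibonacci numbers, starting 0, 1."""
    seq = [0, 1]
    while len(seq) < count:
        seq.append(seq[-1] + seq[-2])  # each term is the sum of the previous two
    return seq[:count]

print(fibonacci(11))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```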
Question 11 |
Both NP-complete
| |
Both in P
| |
NP-complete and in P, respectively
| |
Undecidable and NP-complete, respectively
|
- 3-SAT is NP Complete Problem
- 2-SAT is P Class Problem
Refer this link for more information : http://cstheory.stackexchange.com/questions/6864/why-is-2sat-in-p
Question 12 |
T(n)=2T(n/2)+k , where k is constant
| |
T(n)=T(n/2) +k, where k is constant
| |
T(n)=T(n/2)+logn
| |
T(n)=T(n/2)+n
|
Binary search works on a sorted array and takes O(log n) time.
T(n) = T(n/2) + k, where k is constant, produces a complexity of O(log n).

Recurrence Relation Derivation
Time complexity is - O(log N)
Recurrence relation - T(n)=T(n/2)+1
T(n)=T(n/2) + 1
T(n/2)=T(n/4) + 1 ……
[ T(n/4)= T(n/2^2) ]
T(n/4)=T(n/8) + 1 ……
[ T(n/8)= T(n/2^3) ]
.
.
kth step => T(n/2^(k-1)) = T(n/2^k) + 1
Adding all the equations, we get T(n) = T(n/2^k) + k·1 _____eq(final)
=> n/2^k = 1 [how many times do we need to divide by 2 until only one element is left?]
=> n = 2^k
=> log n = k [taking log (base 2) on both sides]
Put k = log n in eq(final):
T(n) = T(1) + log n
T(n) = 1 + log n [we know that T(1) = 1, because it is the base condition: we are left with only one element in the array, and that is the element to be searched, so we return 1]
T(n) = O(log n) [keeping the dominant term, which is log n here]
Question 13 |
Insertion Sort
| |
Quick Sort
| |
Heap Sort
| |
Selection Sort
|
Selection sort is an in-place algorithm with the minimum number of swaps. It follows a greedy approach and takes O(n) swaps to sort an array of n elements.
Number of swaps in the worst-case scenario:
- Quick sort = O(n²)
- Insertion sort = O(n²)
- Selection sort = O(n)
- Heap sort = O(n log n)
Selection sort is preferable if writing to memory is significantly more expensive than reading.
Bubble sort is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. Because it only uses comparisons to operate on elements, it is a comparison sort.
Question 14 |
11
| |
12
| |
13
| |
10
|
2 :8,7,9,22,31,5,13
3 :8,7,9,22,5,31,13
4 :8,7,9,22,5,13,31 = This will take 4 swaps
5 :7,8,9,22,5,13,31
6 :7,8,9,5,22,13,31
7 :7,8,9,5,13,22,31 = This will take 3 swaps
8 :7,8,5,9,13,22,31 = This will take 1 swap
9 :7,5,8,9,13,22,31 = This will take 1 swap
10 :5,7,8,9,13,22,31 = This will take 1 swap
Total 10 swaps are required to sort the array.
Question 15 |
Prim’s algorithm
| |
Dijkstra's algorithm
| |
Bellman-Ford’s algorithm
| |
Floyd-Warshall’s algorithm
|
Dijkstra's Algorithm: used to find the shortest path from a source to all other nodes in a weighted graph (with non-negative edge weights).
Bellman-Ford's Algorithm: used to find the shortest distance from a source to all other nodes in a graph that can contain negative-weight edges.
Floyd-Warshall's Algorithm: used for solving all-pairs shortest path problems, i.e., finding the shortest paths between every pair of vertices in a graph.
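A sketch of Bellman-Ford with negative-cycle detection (the edge list is a made-up example):

```python
def bellman_ford(num_vertices, edges, source):
    """Single-source shortest paths; handles negative edge weights.

    edges: list of (u, v, weight) tuples. Returns the dist list, or None
    if a negative-weight cycle is reachable from the source.
    """
    INF = float('inf')
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):        # relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                    # one more pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist

# Hypothetical 4-vertex graph with one negative edge
edges = [(0, 1, 4), (0, 2, 5), (1, 3, 3), (2, 1, -3), (3, 2, 2)]
print(bellman_ford(4, edges, 0))  # [0, 2, 5, 5]
```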
Question 16 |
O(n log n)
| |
O(n^(3/2))
| |
O(n³)
| |
O(n)
|
Warshall's Algorithm
Warshall's algorithm uses the adjacency matrix to find the transitive closure of a directed graph.
Transitive closure
The transitive closure of a directed graph with n vertices can be defined as the n-by-n boolean matrix T, in which the element in the ith row and jth column is 1 if there exists a directed path from the ith vertex to the jth vertex, and 0 otherwise.
Question 17 |
Dynamic programming
| |
Backtracking
| |
Greedy
| |
Divide and Conquer
|
The Floyd-Warshall algorithm solves the All-Pairs Shortest Path problem, using dynamic programming to find the shortest path between every pair of vertices in a weighted graph. The algorithm works for both directed and undirected weighted graphs, but it does not work for graphs with negative cycles (where the sum of the edge weights in a cycle is negative).
A weighted graph is a graph in which each edge has a numerical value associated with it.
The Floyd-Warshall algorithm is also called Floyd's algorithm, the Roy-Floyd algorithm, the Roy-Warshall algorithm, or the WFI algorithm.
-This algorithm follows the dynamic programming approach to find the shortest paths.
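A minimal sketch of the triple-loop DP (the 4-vertex graph is a made-up example):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths on an n x n distance matrix (inf = no edge)."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # allow vertex k as an intermediate point on the i -> j path
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
# Hypothetical 4-vertex weighted digraph
graph = [[0, 5, INF, 10],
         [INF, 0, 3, INF],
         [INF, INF, 0, 1],
         [INF, INF, INF, 0]]
print(floyd_warshall(graph))
# [[0, 5, 8, 9], [inf, 0, 3, 4], [inf, inf, 0, 1], [inf, inf, inf, 0]]
```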
Question 18 |
1
| |
5
| |
10
| |
20
|
So, for length 5,
5*4/2 = 20/2 = 10 Comparisons
Example: 5 4 3 2 1
1st iteration: 4 numbers are compared with the 5
2nd iteration: 3 numbers are compared with the 4
3rd iteration: 2 numbers are compared with the 3
4th iteration: 1 number is compared with the 2
So the total number of comparisons is 4+3+2+1 = 10,
which can be viewed as the sum of the first (n-1) numbers starting from 1:
S = ((1 + (n-1)) * (n-1)) / 2
S = n(n-1)/2 comparisons.
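The n(n-1)/2 count can be checked with a selection-style double loop (an illustrative sketch):

```python
def count_comparisons(arr):
    """Count comparisons made by a simple selection-style scan of the array."""
    a, comparisons = list(arr), 0
    for i in range(len(a) - 1):
        for j in range(i + 1, len(a)):
            comparisons += 1         # compare a[j] with the current candidate
            if a[j] < a[i]:
                a[i], a[j] = a[j], a[i]
    return comparisons

print(count_comparisons([5, 4, 3, 2, 1]))  # 10 == 5*4//2
```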
Question 19 |
T(n) = 2T(n-1), if n>0
= 1, otherwise
Then T(n) is (in big O order)
O(n)
| |
O(2ⁿ)
| |
O(1)
| |
O(log n)
|
T(n) = 2[2T(n−2)]
T(n) = 2²T(n−2)
T(n) = 2²[2T(n−3)]
T(n) = 2³T(n−3)
T(n) = 2^k T(n−k)
n−k = 0, n = k, T(0) = 1
T(n) = 2ⁿ
Question 20 |
Match the following and choose the correct Solution for the order A, B, C, D
A. | Strassen matrix multiplication | p. | Decrease and Conquer |
B. | Insertion sort | q. | Dynamic Programming |
C. | Gaussian Elimination | r. | Divide and Conquer |
D. | Floyd shortest path algorithm | s. | Transform and Conquer |
r, s, p, q
| |
r, p, s, q
| |
q, s, p, r
| |
s, p, q, r
|
Insertion sort uses the decrease and conquer approach: its loop invariant is that at each step, A[1..j-1] contains the first j-1 elements in sorted order.
Gaussian elimination uses the transform and conquer approach to solve a set of equations.
The Floyd-Warshall algorithm solves the All-Pairs Shortest Path problem, using dynamic programming to find the shortest path between every pair of vertices in a weighted graph.
Question 21 |
n/2
| |
(n-1)/2
| |
(n+1)/2
| |
None of these
|
The average number of comparisons in a sequential search is (N+1)/2, where N is the size of the array.
- If the element is at position 1, it requires 1 comparison.
- If the element is at position 2, it requires 2 comparisons.
- If the element is at position 3, it requires 3 comparisons.
- Similarly, if the element is at position n, it requires n comparisons.
Total comparisons over all positions
= 1 + 2 + ... + n = n(n+1)/2
Average comparisons
= (n(n+1)/2) / n
= (n+1)/2
Question 22 |
Dynamic programming
| |
Backtracking
| |
Divide and conquer
| |
Greedy method
|
Question 23 |
T(n) = 2T(√n) + 1
T(1) = 1
Which of the following is true?
T(n)= O(log log n)
| |
T(n) = O(log n)
| |
T(n) = O(√n)
| |
T(n)= O(n)
|
T(n) = 2T(√n) + 1
Put n = 2^m: T(2^m) = 2T(2^(m/2)) + 1
Let T(2^m) = S(m):
S(m) = 2S(m/2) + 1
By using case 1 of the Master theorem,
S(m) = Θ(m)
S(m) = Θ(log n) [∵ n = 2^m, so m = log n]
∴ T(n) = T(2^m) = S(m) = Θ(log n)
Question 24 |
L
| |
L/2
| |
(L+1)/2
| |
2L
|
In an unsuccessful search, the element is compared against entries until the end of the table, so in the worst case the number of comparisons equals the size of the table. Hence it should be L.
- In sequential search, a successful search needs (L+1)/2 comparisons on average.
- In sequential search, an unsuccessful search needs L comparisons.
Question 25 |
O(n)
| |
O(n log n)
| |
O(n²)
| |
O(n³)
|
T(n + 1) = 2n + (2(n-1) + T(n-1)) [by the substitution method]
T(n + 1) = 2n + (2(n-1) + (2(n-2) + T(n-2)))
T(n + 1) = 2n + (2(n-1) + (2(n-2) + (2(n-3) + T(n-3))))
T(n + 1) = 2n + 2(n-1) + 2(n-2) + 2(n-3) + ... + 2(n-(n-1)) + T(1)
T(n + 1) = 2n + (2n - 2) + (2n - 4) + (2n - 6) + ... + 2 + 1
T(n + 1) = 2[n + n + n + ...] - 2[1 + 2 + 3 +...]
T(n + 1) = 2[n*n] - 2[n(n+1)/2]
T(n + 1) = 2[n*n] - [n*n + n]
T(n + 1) = n*n - n
T(n + 1) = O(n2)
Question 26 |
Greedy method
| |
Backtracking
| |
Dynamic programming
| |
Divide and Conquer
|
Merge sort first divides the array into equal halves and then combines them in a sorted manner.
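The divide-and-combine steps can be sketched as:

```python
def merge_sort(arr):
    """Divide and conquer: split in half, sort each half, merge."""
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # divide
    right = merge_sort(arr[mid:])
    merged = []                     # combine: merge two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```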
Question 27 |
Insertion sort, Quick sort
| |
Quick sort, Quick sort
| |
Quick sort, Insertion sort
| |
Insertion sort, Insertion sort
|
Question 28 |
3.00
| |
3.46
| |
2.81
| |
3.33
|
1 2 3 4 5 6 7 8 9 10 11
For convenience, view binary search on this array as a binary search tree, and apply binary search to the given array.
For the first element (the root), the array splits as
(1 2 3 4 5) 6 (7 8 9 10 11)
Binary search then continues in either the lower or the upper half.
For the lower half: (1 2) 3 (4 5).
For the upper half: (7 8) 9 (10 11), and so on.
So searching for the middle element (i.e., 6) takes 1 comparison. For 3 or 9 it takes 2 comparisons. For 2, 3 comparisons, and so on.
Average number of comparisons = (1 + (2+2) + (3+3+3+3) + (4+4+4+4)) / 11 = 33/11 = 3
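The average can be checked by counting comparisons directly (a sketch; `comparisons_to_find` is an illustrative helper):

```python
def comparisons_to_find(arr, target):
    """Count comparisons binary search makes to locate target in sorted arr."""
    lo, hi, count = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1                  # one comparison against arr[mid]
        if arr[mid] == target:
            return count
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return count

arr = list(range(1, 12))  # 1 .. 11
total = sum(comparisons_to_find(arr, x) for x in arr)
print(total / len(arr))   # 3.0  -> (1 + 2*2 + 4*3 + 4*4) / 11
```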
Question 29 |
The solution of the recurrence relation T(m) = T(3m/4) + 1 is :
Θ(lg m)
| |
Θ(m)
| |
Θ(mlg m)
| |
Θ(lglg m)
|
T(n) = aT(n/b) + Θ(n^k log^p n), where a ≥ 1,
b > 1, k ≥ 0, and p is a real number.
Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))
Case 2: if a = b^k,
a) if p > -1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = -1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < -1, then T(n) = Θ(n^(log_b a))
Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k log^p n)
b) if p < 0, then T(n) = Θ(n^k)
Given
T(m) = T(3m/4) + 1
T(m) = T(m/(4/3)) + 1
In this problem we use the Master theorem:
T(m) = aT(m/b) + m^k log^p m
which matches
T(m) = 1·T(m/(4/3)) + 1·m^0 log^0 m
where a = 1, b = 4/3, k = 0 and p = 0.
By case 2 of the Master theorem:
a = 1 = b^0 = b^k
∴ a = b^k
p = 0 > -1, so case 2(a) applies:
T(m) = Θ(m^(log_b a) · log^(p+1) m)
T(m) = Θ(m^(log_{4/3} 1) · log^1 m) [∵ log_{4/3} 1 = 0]
T(m) = Θ(m^0 · log m)
T(m) = Θ(log m)
Question 30 |
0.08, 0.40, 0.25, 0.15 and 0.12 respectively.
The optimal coding technique will have the average length of :
2.4
| |
1.87
| |
3.0
| |
2.15
|
A → 0.08
B → 0.40
C → 0.25
D → 0.15
E → 0.12

Now Draw a Huffman tree:

Length contribution for each character = number of bits × frequency of occurrence:
A = 1110 (4 bits): 4 × 0.08 = 0.32
B = 0 (1 bit): 1 × 0.40 = 0.40
C = 10 (2 bits): 2 × 0.25 = 0.50
D = 110 (3 bits): 3 × 0.15 = 0.45
E = 1111 (4 bits): 4 × 0.12 = 0.48
Now add these contributions to find the average length:
Average length = 0.32 + 0.4 + 0.5 + 0.45 + 0.48 = 2.15
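The average length can also be computed without building the tree explicitly: each heap merge adds one bit to every symbol beneath it, so summing the merged weights gives the weighted code length (a sketch; `huffman_average_length` is an illustrative name):

```python
import heapq

def huffman_average_length(freqs):
    """Average code length (bits per symbol) of a Huffman code.

    Each heap entry is the total frequency of a subtree; merging two
    subtrees lengthens every code underneath by one bit, which adds
    their combined weight to the average length.
    """
    heap = list(freqs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)   # two lightest subtrees
        b = heapq.heappop(heap)
        total += a + b            # one extra bit for every symbol below
        heapq.heappush(heap, a + b)
    return total

print(huffman_average_length([0.08, 0.40, 0.25, 0.15, 0.12]))  # ≈ 2.15
```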
Question 31 |
A binary search tree in which every non-leaf node has non-empty left and right subtrees is called a strictly binary tree. Such a tree with 19 leaves :
Cannot have more than 37 nodes
| |
Has exactly 37 nodes
| |
Has exactly 35 nodes
| |
Cannot have more than 35 nodes
|
- If every non-leaf node in a binary tree has nonempty left and right subtrees, the tree is called a strictly binary tree.
- A strictly binary tree with n leaves always contains 2n − 1 nodes; with 19 leaves, that is 2 × 19 − 1 = 37 nodes.
- To put it another way, all of the nodes in a strictly binary tree are of degree zero or two, never degree one.

Question 32 |
A binary search tree in which every non-leaf node has non-empty left and right subtrees is called a strictly binary tree. Such a tree with 19 leaves :
Cannot have more than 37 nodes
| |
Has exactly 37 nodes
| |
Has exactly 35 nodes
| |
Cannot have more than 35 nodes
|
- If every non-leaf node in a binary tree has nonempty left and right subtrees, the tree is called a strictly binary tree.
- A strictly binary tree with n leaves always contains 2n − 1 nodes; with 19 leaves, that is 2 × 19 − 1 = 37 nodes.
- To put it another way, all of the nodes in a strictly binary tree are of degree zero or two, never degree one.

Question 33 |
(a)-(iv), (b)-(i), (c)-(iii), (d)-(ii)
| |
(a)-(iv), (b)-(iii), (c)-(i), (d)-(ii)
| |
(a)-(iii), (b)-(iv), (c)-(ii), (d)-(i)
| |
(a)-(iv), (b)-(iii), (c)-(ii), (d)-(i)
|
- The 8-Queens Problem uses Backtracking.
- Single-Source Shortest Paths uses a Greedy approach.
- Strassen's Matrix Multiplication uses the Divide and Conquer technique.
- Optimal Binary Search Trees use Dynamic Programming.
Question 34 |
45
| |
72
| |
360
| |
450
|
- An octal number is a number whose digits each take one of 8 possible values (0 to 7).
- The question concerns a 5-digit octal number, so the number of digits is 5,
where
digits = 5,
base of the number = 8,
number of values = 9.
Therefore the maximum number of comparisons = 5 × 8 × 9 = 360.
Question 35 |
30
| |
33
| |
45
| |
125
|
The following relation holds:
L = (k-1)*n + 1
where
L is the number of leaf nodes,
n is the number of internal nodes.
Since it is a full k-ary tree, every internal node has exactly k children.
Given
k = 5
Number of internal nodes n = 8
Number of leaf nodes = (k-1)*n + 1
= (5-1)*8 + 1
= 4 * 8 + 1
= 33
Question 36 |
E is the number of edges in the graph and f is maximum flow in the graph. When the capacities are integers, the runtime of Ford-Fulkerson algorithm is bounded by :
O(E∗f)
| |
O(E²∗f)
| |
O(E∗f²)
| |
None of the above
|
Question 37 |
T(n)=2T(n-1), if n>0.
=1, otherwise
O(n log n)
| |
O(n²)
| |
O(2ⁿ)
| |
O(n)
|
T(n) = 2[2T(n−2)]
T(n) = 2²T(n−2)
T(n) = 2²[2T(n−3)]
T(n) = 2³T(n−3)
T(n) = 2^k T(n−k)
n−k = 0, n = k, T(0) = 1
T(n) = 2ⁿ
Question 38 |
O(n log n)
| |
O(log n)
| |
O(n)
| |
O(n²)
|
In a max-heap, the minimum element can be any of the leaf nodes, so we need to check all leaf nodes for the minimum value. The worst-case complexity is therefore O(n).
Question 39 |
O(n)
| |
O(nlogn)
| |
O(logn) for search and insert, and O(n) for delete
| |
None of these
|
The worst case for binary search tree operations occurs when the tree is a skewed tree, i.e., when the tree grows in only one direction.
In a skewed binary search tree (BST), the operations search, update and delete can each take O(n).

In the best case,
the binary search tree is a balanced binary search tree, and
the height of the binary search tree becomes log(n).
So the time complexity of BST operations = O(log n).

Question 40 |
Greedy approach
| |
Dynamic programming
| |
Backtracking paradigm
| |
Divide and conquer paradigm
|
- Dijkstra's algorithm is a single-source shortest path algorithm: it finds the shortest paths from a given source vertex to all other vertices.
- The algorithm works for both directed and undirected graphs.
- It works only for connected graphs.
- The graph should not contain negative edge weights.
- The algorithm predominantly follows a greedy approach, repeatedly choosing the locally optimal (closest) vertex.
- The main logic of the algorithm is based on the following relaxation formula:
dist[r] = min(dist[r], dist[q] + cost[q][r])
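The relaxation step can be sketched with a binary heap (the adjacency-list graph is a made-up example):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]} with non-negative weights."""
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    pq = [(0, source)]                      # min-heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                        # stale heap entry, skip
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:       # the relaxation formula above
                dist[v] = dist[u] + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Hypothetical graph
graph = {'a': [('b', 1), ('c', 4)],
         'b': [('c', 2), ('d', 6)],
         'c': [('d', 3)],
         'd': []}
print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```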
Question 41 |
89,19,50,17,12,15,2,5,7,11,6,9,100
Minimum _____ number of interchanges are needed to convert it into a max-heap
3
| |
4
| |
5
| |
6
|

Question 42 |
Selection sort
| |
Bubble sort
| |
Merge sort
| |
Quick sort
|
Bubble Sort — O(n²)
Quick Sort — O(n²)
Selection Sort — O(n²)
Question 43 |
12
| |
10
| |
6
| |
5
|
Let n = 10^k records, and substitute the values into the above condition:
10 × (10^k) log₁₀(10^k) ≤ 0.0001 × (10^k)²
10^(k+1) × k ≤ 0.0001 × 10^(2k)
k ≤ 10^(2k−k−1−4)
k ≤ 10^(k−5)
k = 5 doesn't satisfy this, but k = 6 does.
So the correct answer is C.
Question 44 |
Greedy
| |
Branch and bound
| |
Dynamic Programming
| |
Backtracking
|
Backtracking is an approach to problem solving usually applied to constraint satisfaction problems such as puzzles.
The 4-Queens problem consists of placing four queens on a 4×4 chessboard so that no two queens share the same row, column, or diagonal.
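A backtracking sketch for the general n-queens problem (function names are illustrative; each solution lists the column of the queen in each row):

```python
def solve_n_queens(n):
    """Return all placements; solutions[i] gives the column of the queen in row i."""
    solutions = []

    def safe(cols, col):
        row = len(cols)
        # no shared column, no shared diagonal with any earlier row
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            solutions.append(list(cols))
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)
                place(cols)          # recurse to the next row
                cols.pop()           # backtrack: undo and try the next column

    place([])
    return solutions

print(solve_n_queens(4))  # [[1, 3, 0, 2], [2, 0, 3, 1]]
```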
Question 45 |
True,True
| |
False, False
| |
True,False
| |
False,False
|
An algorithm is said to be stable if two equal elements appear in the same order in the output (sorted list) as they appear in the input (unsorted list).
For example, suppose we have 5 elements: 4, 7, 2, 8, 2.
For clarity, rename the sequence as 4, 7, 2 (first occurrence), 8, 2 (second occurrence). After sorting we get 2, 2, 4, 7, 8. But which 2 came first: 2 (first occurrence) or 2 (second occurrence)? If equal elements remain in the same order as in the input, the algorithm is called stable; otherwise it is unstable.

These sorting algorithms are usually stable:
- counting sort
- merge sort
- insertion sort
- bubble sort
- binary tree sort
These are usually unstable:
- quicksort
- heapsort
- selection sort
Question 46 |
Quick sort
| |
Merge sort
| |
Shell sort
| |
Bubble sort
|

The bubble sort procedure is the slowest in all 3 cases.
Question 47 |
T(n)=2T(n-2)+2
| |
T(n)=2T(n/2)+1
| |
T(n)=2T(n-2)+n
| |
T(n)=2T(n-1)+1
|
T(1)=1
T(n)=2T(n−1)+1
Question 48 |
O(mn)
| |
O(m)
| |
O(m+n)
| |
O(n)
|
If the edges are already sorted, then there is no need to construct a min-heap,
so the time for deletions from the min-heap is saved.
In this case, the time complexity of Kruskal's algorithm = O(E + V),
where E is the number of edges and V the number of vertices.
Question 49 |
12
| |
10
| |
6
| |
5
|
Let n = 10^k records, and substitute the values into the above condition:
10 × (10^k) log₁₀(10^k) ≤ 0.0001 × (10^k)²
10^(k+1) × k ≤ 0.0001 × 10^(2k)
k ≤ 10^(2k−k−1−4)
k ≤ 10^(k−5)
k = 5 doesn't satisfy this, but k = 6 does.
So the correct answer is C.
Question 50 |
O(logn)
| |
O(n)
| |
O(nlogn)
| |
None of the above, as the tree cannot be be uniquely determined.
|
2. In a BST (binary search tree), the inorder traversal is always in sorted order.
Algorithm to build a binary tree from inorder and postorder traversals:
- Take the last element of the postorder as the root of the binary tree. This takes O(1) time.
- Perform binary search on the given sequence 1 to n to find that element in the inorder traversal. This takes O(log n) time.
- Divide the inorder traversal at that element, using it as a pivot.
- Repeat for both split parts. This takes T(k) and T(n-k-1) time.
Since the given tree is a binary search tree, there is no need to perform the binary search: we simply take the pivot and split the list of 1 to n elements. The recurrence relation becomes T(n) = T(k) + T(n-k-1) + O(1) = O(n). So it takes O(n) time to build a binary search tree from its inorder and postorder/preorder traversals.
Question 51 |
O(n)
| |
O(m)
| |
O(m+n)
| |
O(mn)
|
Its time complexity is Θ(m+n)
Question 52 |
Solves it in linear time using a left to right pass of the array
| |
Solves in linear time using a right to left pass of the array
| |
Solves it using divide and conquer in time O(nlogn)
| |
Solves it in time O(n²)
|
Method 1 (Brute Force)
Use two loops. The outer loop runs from 0 to length – 1 and one by one picks all elements from left to right. The inner loop compares the picked element to all the elements to its right side. If the picked element is greater than all the elements to its right side, then the picked element is the leader.
Time Complexity: O(n*n)
Method 2 (Scan the array from right)
Start traversing from the right side of the array, keeping the highest element seen so far in a variable. If we find an element greater than this variable, print it and update the variable with the new value.
Time Complexity: O(n)
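Method 2 can be sketched as (the input array is an illustrative example):

```python
def leaders(arr):
    """An element is a leader if it is greater than all elements to its right."""
    result = []
    max_so_far = float('-inf')
    for x in reversed(arr):          # single right-to-left pass, O(n)
        if x > max_so_far:
            result.append(x)
            max_so_far = x
    return result[::-1]              # restore original left-to-right order

print(leaders([16, 17, 4, 3, 5, 2]))  # [17, 5, 2]
```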
Question 53 |

(a)-(iii), (b)-(ii), (c)-(iv), (d)-(i)
| |
(a)-(ii), (b)-(iii), (c)-(iv), (d)-(i)
| |
(a)-(i), (b)-(ii), (c)-(iv), (d)-(iii)
| |
(a)-(i), (b)-(iii), (c)-(iv), (d)-(ii)
|
The branching factor is the number of children at each node
Solution depth
The length of the shortest path from the initial node to a goal node.
Maximum depth
The maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node.
Note: A leaf is a node with no children.
Depth limit - The unbounded tree problem appeared in DFS can be fixed by imposing a limit on the depth that DFS can reach, this limit we will call depth limit.
- BFS → O(b^d) worst-case space complexity
- DFS → O(bm) worst-case space complexity
- Depth-Limited Search → O(bl)
- Iterative Deepening Search → O(bd)
Question 54 |

Use Kruskal’s algorithm to find the minimum spanning tree of the graph. The weight of this minimum spanning tree is
17
| |
14
| |
16
| |
13
|

2+1+2+1+3+4+2+1=16
Question 55 |

where V and E are the number of vertices and edged in graph respectively.
Code:
(a)-(i), (b)-(iii), (c)-(iv), (d)-(ii)
| |
(a)-(i), (b)-(iii), (c)-(ii), (d)-(iv)
| |
(a)-(iii), (b)-(i), (c)-(ii), (d)-(iv)
| |
(a)-(iii), (b)-(i), (c)-(iv), (d)-(ii)
|
Case-1:
This case is valid when-
- The given graph G is represented as an adjacency matrix.
- Priority queue Q is represented as an unordered list.
- A[i,j] stores the information about edge (i,j).
- Time taken for selecting i with the smallest dist is O(V).
- For each neighbor of i, time taken for updating dist[j] is O(1) and there will be maximum V neighbors.
- Time taken for each iteration of the loop is O(V) and one vertex is deleted from Q.
- Thus, the total time complexity becomes O(V²).
This case is valid when-
- The given graph G is represented as an adjacency list.
- Priority queue Q is represented as a binary heap.
- With adjacency list representation, all vertices of the graph can be traversed using BFS in O(V+E) time.
- In a min-heap, operations like extract-min and decrease-key take O(log V) time.
- So the overall time complexity becomes O(E+V) × O(log V) = O((E + V) log V) = O(E log V).
- This time complexity can be reduced to O(E + V log V) using a Fibonacci heap.
Floyd-Warshall algorithm: The Floyd-Warshall all-pairs shortest path algorithm runs in O(n³) time, because it has three nested for loops, each with n iterations.
Topological sorting: The algorithm is simply DFS with an extra stack, so its time complexity is the same as DFS, which is O(V+E).
Question 56 |
n + ceil(log n) - 2
| |
n - 1
| |
log n
| |
3n/1
|
Input : 1 2 3 4 5 6 7 8
output : 2 (2 is the second smallest number)

1 is the smallest, it took n-1 = 7 comparisons.
Now compare all the numbers which 1 was compared to (compare 2,3,5).

2 is the second smallest. It took log(n)-1 = 2 comparisons.
Hence,
n - 1 + log(n) - 1
= n + log(n) - 2
= n + ceil(log n) - 2 [the ceiling is needed when n is not an exact power of 2]
Question 57 |
X = <0,1,2,1,3,0,1 >
Y = <1,3,2,0,1,0 >
The length of longest common subsequence between X and Y is
4
| |
2
| |
3
| |
5
|
If there is no match, then A[i, j] = max(A[i - 1, j], A[i, j - 1]).
Longest Common Subsequence table (top row: X; left column: Y):
    |   | 0 | 1 | 2 | 1 | 3 | 0 | 1
    | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
  1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1
  3 | 0 | 0 | 1 | 1 | 1 | 2 | 2 | 2
  2 | 0 | 0 | 1 | 2 | 2 | 2 | 2 | 2
  0 | 0 | 1 | 1 | 2 | 2 | 2 | 3 | 3
  1 | 0 | 1 | 2 | 2 | 3 | 3 | 3 | 4
  0 | 0 | 1 | 2 | 2 | 3 | 3 | 4 | 4
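The table corresponds to the standard DP, sketched here (`lcs_length` is an illustrative name):

```python
def lcs_length(x, y):
    """Classic DP: a[i][j] = LCS length of x[:i] and y[:j]."""
    m, n = len(x), len(y)
    a = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                a[i][j] = a[i - 1][j - 1] + 1       # match: extend the diagonal
            else:
                a[i][j] = max(a[i - 1][j], a[i][j - 1])
    return a[m][n]

print(lcs_length([0, 1, 2, 1, 3, 0, 1], [1, 3, 2, 0, 1, 0]))  # 4
```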
Question 58 |
The solution of recurrence relation : T(n) = 2T(sqrt(n)) + lg(n) is
O(lg (n) lg(n))
| |
O(lg (n))
| |
O(n lg (n))
| |
O(lg (n) lg(lg(n)))
|
T(n) = 2T(n^(1/2)) + log n
Put n = 2^k:
T(2^k) = 2T((2^k)^(1/2)) + k
T(2^k) = 2T(2^(k/2)) + k
Put T(2^k) = S(k),
so S(k) = 2S(k/2) + k.
By the Master theorem,
S(k) = O(k log k).
Hence T(n) = O(log n · log log n).
Question 59 |
O(n log n), O(n log n) and O(n²)
| |
O(n²), O(n²) and O(n log n)
| |
O(n²), O(n log n) and O(n log n)
| |
O(n²), O(n log n) and O(n²)
|

Question 60 |
Bellman ford algorithm for single source shortest path
| |
Floyd Warshall for all pairs shortest paths
| |
0-1 knapsack problem
| |
Prim’s minimum spanning tree
|
- Bellman ford algorithm for single source shortest path - Dynamic Programming based
- Floyd Warshall for all pairs shortest paths - Dynamic programming Based
- 0-1 knapsack problem - Dynamic Programming based
- Prim's Minimum Spanning Tree - Greedy Algorithm.
Question 61 |
Maximum sum subsequence in an array
| |
Maximum sum subarray in an array
| |
Maximum product subsequence in an array
| |
Maximum product subarray in an array
|
Given an array of integers, find contiguous subarray within it which has the largest sum.
Example
Input: {-2, 1, -3, 4, -1, 2, 1, -5, 4}
Output: Subarray with the largest sum is {4, -1, 2, 1} with sum 6.
We can easily solve this problem in linear time using Kadane's algorithm. The idea is to maintain the maximum (positive-sum) subarray "ending" at each index of the given array. This subarray is either empty (in which case its sum is zero) or consists of one more element than the maximum subarray ending at the previous index.
The time complexity of Kadane's algorithm is O(n), and the auxiliary space used is O(1).
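Kadane's algorithm as described (with the empty subarray allowed) can be sketched as:

```python
def max_subarray_sum(arr):
    """Kadane's algorithm: O(n) time, O(1) extra space."""
    best = current = 0
    for x in arr:
        current = max(0, current + x)  # best sum of a subarray ending here
        best = max(best, current)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
```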
Question 62 |
O(nlogd)
| |
O(n2 logd)
| |
O(nd)
| |
O(n2d)
|
The number of shifts for each element is bounded by O(d), since no element is more than d places from its sorted position.
Therefore, for n elements, the total complexity is bounded above by O(n*d).
Question 63 |
Every minimum spanning tree of G must contain emin and emax
| |
If emax is in a minimum spanning tree, then its removal must disconnect G
| |
No minimum spanning tree contains emax
| |
G has multiple minimum spanning trees
| |
None |
Every minimum spanning tree of G must contain emin, but "may or may not" contain emax.
Kruskal's algorithm always picks edges in ascending order of their weights while constructing a minimum spanning tree of G; the first (lightest) edge can never create a cycle.
Whether every minimum spanning tree of G contains emax is "may or may not", as explained in option (C).
Option (B): False
"If emax is in a minimum spanning tree, then its removal must disconnect G" is not always true.
emax would be included in the MST if and only if emax is a bridge between two connected components, the removal of which will surely disconnect the graph.

Option (C): False
"No minimum spanning tree contains emax"
should be written as "may or may not", to be true.
Refer to Case 1 in option (B).
Option (D): False
"G has multiple minimum spanning trees":
G has unique edge weights, so the MST will be unique. If edge weights were repeated, there could have been a possibility of non-unique MSTs.
Question 64 |
n log(n/2)
| |
n log n
| |
(n log n)/2
| |
log n |
So in such a case, at each level we will have n/2 comparisons.
In general, the number of comparisons is
= n/2 × number of levels
= n/2 × log₂n
= (n log n)/2
For n = 2, only 1 comparison;
for n = 4, only 4 comparisons, and so on.


Question 65 |
O(m*n)
| |
O(m+n)
| |
O(m²)
| |
O(mn)
|
Time complexity:
The algorithm is built upon DFS; therefore, each node is visited once and only once. For each node, we perform a constant amount of work and iterate over its adjacency list. Thus, the complexity is O(|V| + |E|).
At maximum, the depth of recursion and the size of the stack can be n nodes. Thus the space complexity is O(|V|).
Question 66 |
Insertion sort
| |
Merge sort
| |
Quick sort
| |
Bubble sort
|

Question 67 |
P
| |
NP
| |
Both (A) and (B)
| |
None of these
|
This follows from the following three facts:
1) The complement of an NP-complete problem is co-NP-complete;
2) if any co-NP-complete problem is in a class C that is closed under polynomial-time reductions, then co-NP ⊆ C;
3) NP is closed under polynomial-time reductions.
Question 68 |
T(n)=T(n-1)+1/n, if n>1
=1, otherwise
The order of this algorithm is
log n
| |
n
| |
n²
| |
nⁿ
|
T(n-1) = T(n-2) + 1/(n-1); substitute into the above equation:
T(n) = T(n-2) + 1/(n-1) + 1/n
T(n) = T(n-3) + 1/(n-2) + 1/(n-1) + 1/n
T(n) = T(n-4) + 1/(n-3) + 1/(n-2) + 1/(n-1) + 1/n
...
T(n) = T(n-k) + 1/(n-(k-1)) + 1/(n-(k-2)) + 1/(n-(k-3)) + ... + 1/n
Assume T(1) = 1.
Substitute k = n-1:
T(n) = T(1) + 1/(n-(n-2)) + 1/(n-(n-3)) + 1/(n-(n-4)) + 1/(n-(n-5)) + ... + 1/n
T(n) = 1 + 1/2 + 1/3 + 1/4 + 1/5 + ... + 1/n [which is the harmonic series]
Therefore T(n) = O(log n)
Question 69 |
Place references to them in an array and sort the array
| |
Place them in a linked list and sort the linked list
| |
Place pointers to them in an array and sort the array
| |
Place them in an array and sort the array
|
Question 70 |
b= branching factor
d = depth of the shallowest solution
m= maximum depth of the search tree
l=depth limit

(a)-(iii), (b)-(ii), (c)-(iv), (d)-(i)
| |
(a)-(ii), (b)-(iii), (c)-(iv), (d)-(i)
| |
(a)-(i), (b)-(ii), (c)-(iv), (d)-(iii)
| |
(a)-(i), (b)-(iii), (c)-(iv), (d)-(ii)
|
The branching factor is the number of children at each node
Solution depth
The length of the shortest path from the initial node to a goal node.
Maximum depth
The maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node.
Note: A leaf is a node with no children.
Depth limit - The unbounded tree problem appeared in DFS can be fixed by imposing a limit on the depth that DFS can reach, this limit we will call depth limit.
- BFS → O(b^d) worst-case space complexity
- DFS → O(bm) worst-case space complexity
- Depth-Limited Search → O(bl)
- Iterative Deepening Search → O(bd)
Question 71 |
11
| |
12
| |
10
| |
9
|
It is given that:
4 nodes have 1 child each (degree = 1 child + 1 parent = 2).
3 nodes have 2 children each (degree = 2 + 1 = 3).
3 nodes have 3 children each; one of these is the root, which has no parent, so its degree is 3 and the other two have degree 4.
Let the number of leaf nodes be N. Each leaf has degree 1, because it is attached only to its parent.
Total number of nodes in the tree = internal nodes + leaf nodes = 4 + 3 + 3 + N = 10 + N.
Therefore, the total number of edges = total number of nodes - 1 = 9 + N.
For any tree: sum of degrees of all vertices = 2 × total number of edges.
(3×1) + (2×4) + (3×3) + (4×2) + (1×N) = 2(9 + N)
3 + 8 + 9 + 8 + N = 18 + 2N
N = 10.
Question 72 |

Use Kruskal’s algorithm to find the minimum spanning tree of the graph. The weight of this minimum spanning tree is
17
| |
14
| |
16
| |
13
|

2+1+2+1+3+4+2+1=16
Question 73 |
n + ⌈lg n⌉ - 2
| |
N-1
| |
Lg n
| |
3n/1
|
Input : 1 2 3 4 5 6 7 8
output : 2 (2 is the second smallest number)

1 is the smallest; finding it by pairwise "tournament" comparisons took n - 1 = 7 comparisons.
Now compare all the numbers that 1 was directly compared against (compare 2, 3, 5).

2 is the second smallest. It took ⌈log₂ n⌉ - 1 = 2 comparisons.
Hence, the total is
(n - 1) + (⌈log₂ n⌉ - 1)
= n + ⌈log₂ n⌉ - 2
Question 74 |
X = <0,1,2,1,3,0,1 >
Y = <1,3,2,0,1,0 >
The length of longest common subsequence between X and Y is
4
| |
2
| |
3
| |
5
|
If the characters match, A[i, j] = A[i - 1, j - 1] + 1; if there is no match, A[i, j] = max(A[i - 1, j], A[i, j - 1]).
longest Common Subsequence Table:
X | 0 | 1 | 2 | 1 | 3 | 0 | 1 | |
Y | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
3 | 0 | 0 | 1 | 1 | 1 | 2 | 2 | 2 |
2 | 0 | 0 | 1 | 2 | 2 | 2 | 2 | 2 |
0 | 0 | 1 | 1 | 2 | 2 | 2 | 3 | 3 |
1 | 0 | 1 | 2 | 2 | 3 | 3 | 3 | 4 |
0 | 0 | 1 | 2 | 2 | 3 | 3 | 4 | 4 |
Question 75 |
given below :
(1) Input (x1, y1) and (x2, y2)
(2) y = y1
(3) d = f(x1 + 1, y1 + 1/2) // f is the implicit form of a line
(4) for x = x1 to x2
(5) do
(6) plot(x, y)
(7) if (d < 0)
(8) then
(9) y = y + 1
(10) d = d + (y1 - y2) + (x2 - x1)
(11) else
(12) d = d + (y1 - y2)
(13) end
(14) end
Which statements are true ?
P: For a line with slope m>1, we should change the outer loop in line (4) to be over y.
Q: Lines (10) and (12) update the decision variable d through an incremental evaluation of the line equation f.
R: The algorithm fails if d is ever 0.
Q and R only
| |
P only
| |
P and Q only
| |
P,Q and R
|
Q: Lines (10) and (12) update the decision variable d through an incremental evaluation of the line equation f.True
R: The algorithm fails if d is ever 0. False — when d = 0 the else branch is taken and the algorithm still works correctly.
Question 76 |
Splitting
| |
Completion
| |
Beginning
| |
Joining
|
Question 77 |
O(lg (n) lg(n))
| |
O(lg (n))
| |
O(n lg (n))
| |
O(lg (n) lg(lg(n)))
|
T(n) = 2T(n^(1/2)) + log n
Put n = 2^k:
T(2^k) = 2T((2^k)^(1/2)) + k
T(2^k) = 2T(2^(k/2)) + k
Put T(2^k) = S(k):
S(k) = 2S(k/2) + k
By the Master Theorem,
S(k) = O(k log k)
Hence T(n) = O(log n · log log n)
Question 78 |
Optimization
| |
NP complete
| |
Linear Solution
| |
Sorting
|
- 0/1 knapsack problem belongs to a class of "NP problems", which stands for “nondeterministic polynomial time."
- Fractional knapsack belongs to a class of "P class problem".
Refer : https://en.wikipedia.org/wiki/Knapsack_problem
Question 79 |
No. of inputs
| |
Arrangement of elements in an array
| |
Size of elements
| |
Pivot element
|
Since the partition() method depends on the pivot, we conclude that:
the value of the pivot will affect the performance of Quick Sort.
The Quick Sort algorithm performs the worst when the pivot is the largest or smallest value in the array
Reason:
When the pivot is the largest or smallest value in the array, the partition() step will do nothing to improve the ordering of the array elements
The Quick Sort algorithm performs the best when the pivot is the middle value in the array
Reason:
In the best case, the input array is divided into two subarrays, each containing approximately n/2 elements.
Question 80 |
Processor and Memory
| |
Complexity and Capacity
| |
Time and Space
| |
Data and Space
|
- Space complexity : The amount of memory space an algorithm needs to run to completion
- Time complexity : The amount of computer time an algorithm needs to run to completion
Question 81 |
O(log n)
| |
O(n)
| |
O(n logn)
| |
None of these
|
T(n) = a·T(n/b) + Θ(n^k · log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.
Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))
Case 2: if a = b^k,
a) if p > -1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = -1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < -1, then T(n) = Θ(n^(log_b a))
Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k · log^p n)
b) if p < 0, then T(n) = Θ(n^k)
Given
T(n) = T(n/2) + n
which matches
T(n) = 1·T(n/2) + n^1 · log^0 n
with a = 1, b = 2, k = 1, p = 0.
Case 3 of the Master Theorem applies since a < b^k, i.e. 1 < 2^1.
As p ≥ 0,
T(n) = Θ(n^k · log^p n)
T(n) = Θ(n^1 · log^0 n)
T(n) = Θ(n)
Question 82 |
It can be used to decide the best algorithm that solves a given problem
| |
It is the lower bound of the growth rate of algorithm
| |
It determines the maximum size of a problem that can be solved in a given amount of time
| |
Both (A) and (B)
|
- Big O notation is one of the most fundamental tools for computer scientists to analyze the cost of an algorithm.
- Big-O is an upper bound, Big-Omega is a lower bound, and Big-Theta bounds from both sides.
Question 83 |

19
| |
18
| |
17
| |
20
|

Given capacity W = 8.
As per the greedy algorithm we take the item with the highest profit/weight ratio first.
Item 1 has the highest ratio, so take item 1: profit = 3, remaining capacity 8 - 1 = 7.
Item 2 has the next highest ratio, so take item 2: profit = 5, remaining capacity 7 - 2 = 5.
Now items 3 and 5 have the same profit/weight ratio, but item 5 weighs 8 while the remaining capacity is only 5, so item 5 cannot be picked.
If we picked item 3: profit = 9, remaining capacity 5 - 4 = 1 (1 unit of capacity wasted).
So we pick item 4 instead: profit = 11, remaining capacity 5 - 5 = 0.
We pick items 1, 2, 4: 3 + 5 + 11 = 19.
Total profit = 19.
Question 84 |
void Function(int n)
{
int i=1;
int s=1;
while(s<=n)
{
i++;
s=s+i;
}
}
O(n)
| |
O(n^2)
| |
O(1)
| |
O(√n)
|
The value of i increases by 1 in each iteration, so the value contained in 's' at the i-th iteration is the sum of the first i positive integers. If k is the total number of iterations taken by the program, then the while loop terminates when:
1 + 2 + 3 + ... + k = k(k+1)/2 > n, so k = O(√n).
Question 85 |
50.2 sec
| |
6.7 sec
| |
72.7 sec
| |
11.2 sec
|
Sorting 1000 names takes about 1000 × log(1000) ≈ 9000 comparisons (using log base 2, ≈ 9 per element), which takes 100 sec.
To sort 100 names a minimum of about 100 × log(100) ≈ 600 comparisons are needed.
Scaling the running time: 100 × 600/9000 ≈ 6.7 sec.
Question 86 |
F3, F4, F1, F5, F6, F2
| |
F2, F6, F5, F1, F4, F3
| |
F1, F2, F3, F4, F5, F6
| |
Ordering is immaterial as all files are accessed with the same frequency.
|
- This question is based on the Optimal Storage on Tape problem.
- A greedy approach is used to find the order with optimal retrieval time.
- The files are to be stored sequentially on tape.
- To read a particular file we need to start from the beginning of the tape.
- The goal is to find the order of storage that minimizes the cost of accessing the files; to achieve it, the files are stored in increasing order of length.
Question 87 |
int recursive (int n)
{
if(n == 1)
return (1);
else
return (recursive (n-1) + recursive (n-1));
}
O(n)
| |
O(n log n)
| |
O(n2)
| |
O(2n)
|
Each call to recursive(n) makes two recursive calls on n-1, so T(n) = 2T(n-1) + c, which unrolls to O(2^n).
Question 88 |
25
| |
75
| |
35
| |
22×5
|
The total number of spanning trees of a complete graph is n^(n-2) (Cayley's formula),
where n is the number of nodes in the graph.
- So the number of spanning trees possible in a complete graph with 7 nodes = 7^(7-2) = 7^5.
Question 89 |
20, 47, 15, 8, 9, 4, 40, 30, 12, 17
then the order of these elements after second pass of the algorithm is:
8, 9, 15, 20, 47, 4, 12, 17, 30, 40
| |
8, 15, 20, 47, 4, 9, 30, 40, 12, 17
| |
15, 20, 47, 4, 8, 9, 12, 30, 40, 17
| |
4, 8, 9, 15, 20, 47, 12, 17, 30, 40
|

Question 90 |
16
| |
8
| |
4
| |
15
|
- So the number of spanning trees possible in a complete graph with 4 nodes = 4^(4-2) = 4^2 = 16.
Question 91 |
Stack
| |
Binary search tree
| |
Tree
| |
Linked list
|
Question 92 |
w=30, w1=10, p1=11, w2=15, p2=12, w3=20, p3=24, w4=25, p4=25
Largest profit per unit weight first
| |
Largest profit first
| |
Lightest item first
| |
Heaviest item first
|
Question 93 |
Θ(n^2.5)
| |
Θ(n)
| |
Θ(n^2)
| |
Θ(n)
|
Question 94 |
n^2/2
| |
n(n-1)/2
| |
n^2
| |
n(n+1)/2
|
Complexity Analysis of Bubble Sort
In Bubble Sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass, and so on. So the total number of comparisons is
(n-1) + (n-2) + (n-3) + ... + 3 + 2 + 1 = n(n-1)/2
i.e. O(n^2).
Hence the time complexity of Bubble Sort is O(n^2).
Also, the best case time complexity will be O(n); this is when the list is already sorted.
Following are the time and space complexities for the Bubble Sort algorithm:
- Worst Case Time Complexity [Big-O]: O(n^2)
- Best Case Time Complexity [Big-Omega]: O(n)
- Average Time Complexity [Big-Theta]: O(n^2)
- Space Complexity: O(1)
Question 95 |
T(n)=8T(n/2)+qn, if n>1
=p, if n=1
Where p,q are constants. The order of this algorithm is
n^2
| |
n^n
| |
n^3
| |
N
|
T(n) = a·T(n/b) + Θ(n^k · log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.
Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))
Case 2: if a = b^k,
a) if p > -1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = -1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < -1, then T(n) = Θ(n^(log_b a))
Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k · log^p n)
b) if p < 0, then T(n) = Θ(n^k)
Given,
T(n) = 8T(n/2) + qn
which matches the Master Theorem form
T(n) = a·T(n/b) + Θ(n^k · log^p n)
with a = 8, b = 2, k = 1, p = 0, and a > b^k (8 > 2^1).
Hence
T(n) = Θ(n^(log_b a)) = Θ(n^(log₂ 8)) = Θ(n^3)
Question 96 |
f(int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do {
        k = (i + j) / 2;
        if (Y[k] < x)
            i = k;
        else
            j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
On which of the following contents of 'Y' and 'x' does the program fail ?
Y is [1 2 3 4 5 6 7 8 9 10] and x<10
| |
Y is [1 3 5 7 9 11 13 15 17 19] and x<1
| |
Y is [ 2 2 2 2 2 2 2 2 2 2] and x>2
| |
Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even |
Trace option (A) with Y = [1 2 3 4 5 6 7 8 9 10] and x = 9: k takes the values 4, 6, 7, 8; Y[8] = 9, so x is found and the loop terminates.
Trace option (C) with Y = [2 2 2 2 2 2 2 2 2 2] and x > 2: Y[k] < x always holds, so i climbs to 8 while j stays at 9; from then on k = (8+9)/2 = 8 on every pass, Y[8] ≠ x and i < j, so the loop never terminates — the program fails.
Note: option (D) contains a printing mistake: Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even.
Question 97 |
Max(f(n),g(n))
| |
Min(f(n),g(n))
| |
F(n)+g(n)
| |
F(n) * g(n)
|
There exist constants c1, c2 and thresholds n1, n2 such that
T(n) ≤ c1 × f(n), for all n ≥ n1
T(n) ≤ c2 × g(n), for all n ≥ n2
Let N = max(n1, n2) and c = max(c1, c2). So,
T(n) ≤ c × f(n) for all n ≥ N
T(n) ≤ c × g(n) for all n ≥ N
Adding the two: T(n) ≤ c/2 × (f(n) + g(n))
Without loss of generality let max(f(n), g(n)) = f(n).
So, T(n) ≤ c/2 × (f(n) + f(n)) ≤ c × f(n)
So the order is f(n), which is max(f(n), g(n)) by assumption.
Explanation 2 :
Three cases are possible here.
Case 1 : f(n) < g(n)
g(n) is the higher-order term and f(n) the lower-order term, so we ignore f(n) and consider only g(n).
Case 2 : f(n) > g(n)
g(n) is the lower-order term and f(n) the higher-order term, so we ignore g(n) and consider only f(n).
Case 3 : f(n) == g(n)
We consider either O(g(n)) or O(f(n)), which are asymptotically equal.
Note :
O( f(n) + g(n) ) = O( max( f(n), g(n) ) )
O( max( f(n), g(n) ) ) is an asymptotic class, not the plain function max(f(n), g(n)).
Question 98 |
Ω(logn)
| |
Ω(nlogn)
| |
Ω(n)
| |
Ω(n^2)
|
Question 99 |
Greedy
| |
Dynamic
| |
Divide and conquer
| |
Backtracking
|
- Start in the leftmost column
- If all queens are placed return true
- Try all rows in the current column. Do following for every tried row.
(a) If the queen can be placed safely in this row then mark this [row, column] as part of the solution and recursively check if placing queen here leads to a solution.
(b) If placing the queen in [row, column] leads to a solution then return true.
(c) If placing the queen doesn't lead to a solution, then unmark this [row, column] (backtrack) and go to step (a) to try other rows.
- If all rows have been tried and nothing worked, return false to trigger backtracking.
Question 100 |
Divide and conquer
| |
Backtracking
| |
Heuristic approach
| |
Greedy approach
|
Idea:
- Divide the unsorted list into N sublists, each containing 1 element.
- Take adjacent pairs of singleton lists and merge them to form lists of 2 elements; the N lists become N/2 lists of size 2.
- Repeat the process till a single sorted list is obtained.
Reference : https://www.hackerearth.com/practice/algorithms/sorting/merge-sort/tutorial/#:~:text=Merge%20sort%20is%20a%20divide,into%20sublists%2C%20each%20containing%20element.
Question 101 |
M*n
| |
Minimum of m,n
| |
Maximum of m,n
| |
M+n-1
|
Assuming m < n, merging takes at least m comparisons and at most m + n - 1 comparisons (worst case). If all elements of the smaller list come first, the number of comparisons is min(m, n). Assuming that by "simplest" the question means the best case, option (B) is the correct answer; option (D) would be correct for the worst case.
Question 102 |
Bubble sort and Insertion sort
| |
Bubble sort and Quicksort
| |
Bubble sort and merge sort
| |
Quicksort and merge sort
|
Question 103 |
int IsPrime(int n)
{
    int i;
    for (i = 2; i <= sqrt(n); i++)
        if (n % i == 0)
        { printf("Not Prime \n"); return 0; }
    return 1;
}
Let T(n) denotes the number of times the for loop is executed by the program on input n.
Which of the following is TRUE?
T(n) = O(√n) and T(n) = Ω (√n)
| |
T(n) = O(√n) and T(n) = Ω (1)
| |
T(n) = O(n) and T(n) = Ω (√n)
| |
None of the above
|
(1) The loop can terminate as soon as n % i == 0 holds for the first time, so the best case is a single iteration.
(2) The for loop runs at most √n times (when n is prime) and at least 1 time.
Therefore,
T(n) = O(√n) and T(n) = Ω(1)
Question 104 |
A) Dijkstra's algorithm
B) Floyd's algorithm
C) prim's algorithm
D) Warshall's algorithm
Both A and B
| |
Both A and C
| |
Both C and D
| |
Both B and D
|
Dijkstra’s Algorithm : Dijkstra’s Algorithm is used to find the shortest path from a source to all the other nodes in a weighted graph.
Bellman-Ford’s Algorithm : Bellman-Ford’s Algorithm is used to find the shortest distance from a source to all other nodes in a graph that can contain negative weight edges.
Floyd-Warshall’s Algorithm : Floyd-Warshall’s Algorithm is used for solving all-pairs shortest path problems, i.e. finding the shortest paths between all pairs of vertices in a graph.
Robert Floyd and Stephen Warshall are two scientists; the single dynamic-programming algorithm Floyd-Warshall is named after both of them.
Note: they are not two different algorithms, so they should not be given as two separate options.
Reference : https://en.m.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
Question 105 |
Max(f(n),g(n))
| |
Min(f(n),g(n))
| |
F(n)+g(n)
| |
F(n) * g(n)
|
There exist constants c1, c2 and thresholds n1, n2 such that
T(n) ≤ c1 × f(n), for all n ≥ n1
T(n) ≤ c2 × g(n), for all n ≥ n2
Let N = max(n1, n2) and c = max(c1, c2). So,
T(n) ≤ c × f(n) for all n ≥ N
T(n) ≤ c × g(n) for all n ≥ N
Adding the two: T(n) ≤ c/2 × (f(n) + g(n))
Without loss of generality let max(f(n), g(n)) = f(n).
So, T(n) ≤ c/2 × (f(n) + f(n)) ≤ c × f(n)
So the order is f(n), which is max(f(n), g(n)) by assumption.
Explanation 2 :
Three cases are possible here.
Case 1 : f(n) < g(n)
g(n) is the higher-order term and f(n) the lower-order term, so we ignore f(n) and consider only g(n).
Case 2 : f(n) > g(n)
g(n) is the lower-order term and f(n) the higher-order term, so we ignore g(n) and consider only f(n).
Case 3 : f(n) == g(n)
We consider either O(g(n)) or O(f(n)), which are asymptotically equal.
Note :
O( f(n) + g(n) ) = O( max( f(n), g(n) ) )
O( max( f(n), g(n) ) ) is an asymptotic class, not the plain function max(f(n), g(n)).
Question 106 |
T(n)=8T(n/2)+qn, if n>1
=p, if n=1
Where p,q are constants. The order of this algorithm is
n^2
| |
n^n
| |
n^3
| |
N
|
T(n) = a·T(n/b) + Θ(n^k · log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.
Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))
Case 2: if a = b^k,
a) if p > -1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = -1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < -1, then T(n) = Θ(n^(log_b a))
Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k · log^p n)
b) if p < 0, then T(n) = Θ(n^k)
Given,
T(n) = 8T(n/2) + qn
which matches the Master Theorem form
T(n) = a·T(n/b) + Θ(n^k · log^p n)
with a = 8, b = 2, k = 1, p = 0, and a > b^k (8 > 2^1).
Hence
T(n) = Θ(n^(log_b a)) = Θ(n^(log₂ 8)) = Θ(n^3)
Question 107 |
f(int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do {
        k = (i + j) / 2;
        if (Y[k] < x)
            i = k;
        else
            j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
On which of the following contents of 'Y' and 'x' does the program fail ?
Y is [1 2 3 4 5 6 7 8 9 10] and x<10
| |
Y is [1 3 5 7 9 11 13 15 17 19] and x<1
| |
Y is [ 2 2 2 2 2 2 2 2 2 2] and x>2
| |
Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even |
Trace option (A) with Y = [1 2 3 4 5 6 7 8 9 10] and x = 9: k takes the values 4, 6, 7, 8; Y[8] = 9, so x is found and the loop terminates.
Trace option (C) with Y = [2 2 2 2 2 2 2 2 2 2] and x > 2: Y[k] < x always holds, so i climbs to 8 while j stays at 9; from then on k = (8+9)/2 = 8 on every pass, Y[8] ≠ x and i < j, so the loop never terminates — the program fails.
Note: option (D) contains a printing mistake: Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even.
Question 108 |
Linear search
| |
Hash search
| |
Binary search
| |
All of these
|
- Binary search assumes that the middle element splits the sorted array around its median. If the array is not sorted, this assumption fails: the target could be anywhere, and cutting the array in half could cut off the very element being searched for.
Question 109 |
A process of rearranging a given set of objects in a specific order
| |
To facilitate the later search for members of the sorted set
| |
Is a relevant and essential activity, particularly in data processing
| |
All of these
|
The most common uses of sorted sequences are:
- making lookup or search efficient;
- making merging of sequences efficient.
- enable processing of data in a defined order.
Question 110 |
Insertion sort
| |
Heap sort
| |
Merge sort
| |
Bubble sort
|
Question 111 |
O(nlogn)
| |
O(n)
| |
O(logn)
| |
O(n*n)
|
- Linear search (sequential search) is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.
- A linear search runs in at worst linear time and makes at most n comparisons, where n is the length of the list. If each element is equally likely to be searched, linear search makes (n+1)/2 comparisons on average.
- The worst case complexity is O(n), so it is sometimes known as an O(n) search.
Question 112 |
2
| |
10
| |
8
| |
5
|
Given n = 32.
⌈log₂ 32⌉ = 5
∴ maximum number of comparisons = 5
Question 113 |
T(n) = 2 + T(n - 1), where T(0)=2, T(n)=?
N2
| |
2n+2
| |
Log(n)
| |
2n
|
T(n) = 2 + T(n-1) = 2 + 2 + T(n-2) = ... = 2n + T(0) = 2n + 2.
Reference : https://academyera.com/gate-jtit-2016-part-b-computer-science-recurrences
Question 114 |
Satisfiable
| |
Non Solvable
| |
Decision
| |
Optimization
|
Reference : https://en.wikipedia.org/wiki/Travelling_salesman_problem#:~:text=The%20travelling%20purchaser%20problem%20and,class%20of%20NP%2Dcomplete%20problems.
Question 115 |
Stage (i) 12,3,40,2
Stage (ii) 12,3,2,40
Stage (iii) 3,12,2,40
Stage (iv) 3,2,12,40
Stage (v) 2,3,12,40
The array has been sorted using
Insertion sort
| |
Selection sort
| |
Quick sort
| |
Bubble sort
|
- Starting from the first index, compare the first and the second elements. If the first element is greater than the second element, they are swapped. Now compare the second and the third elements, swapping them if they are not in order. The above process goes on until the last element.
- The same process repeats for the remaining iterations. After each iteration, the largest element among the unsorted elements is placed at the end. In each iteration, the comparison takes place only up to the last unsorted element. The array is sorted when all elements are at their correct positions.
for i <- 1 to indexOfLastUnsortedElement - 1
    if leftElement > rightElement
        swap leftElement and rightElement
end bubbleSort
Question 116 |
2*log2 n
| |
2*log2 n+1
| |
Log2 2n
| |
2* Log2 n+1 |
This leads to a total of 2·log₂(n-1) comparisons, which is asymptotically O(log n).
Question 117 |
Floor ((i+1)/2)
| |
Ceiling ((i+1)/2)
| |
Floor (i/2)
| |
Ceiling (i/2)
|
For the binary heap element stored at index i of the array (using 1-based indexing):
- Parent node is at index: floor(i/2)
- Left child is at index: 2i
- Right child is at index: 2i + 1
Question 118 |
3
| |
4
| |
5
| |
6
|

Question 119 |
Every minimum spanning tree of G must contain E min.
| |
If E max is in minimum spanning tree, then its removal must disconnect G.
| |
No minimum spanning tree contains E max.
| |
G has a unique minimum spanning tree.
|
Option (A): Always true. Kruskal's algorithm picks edges in ascending order of weight while constructing a minimum spanning tree of G; the minimum-weight edge Emin is picked first and can never create a cycle, so every MST of G must contain Emin.
Option (B): True. Emax is included in an MST if and only if Emax is a bridge between two connected components; removal of a bridge surely disconnects the graph.
Option (C): False. "No minimum spanning tree contains Emax" should be "may or may not contain Emax" — see the bridge case in option (B).
Option (D): True. G has distinct edge weights, so the MST is unique. If edge weights were repeated, there could be non-unique MSTs.
Question 120 |
O(n log n)
| |
O(n^2 log n)
| |
O(n^2 + log n)
| |
O(n^3)
|
- Merge sort performs O(n log n) comparisons: each of the log n levels involves O(n) comparisons.
- Here a single comparison takes O(n) time in the worst case.
- So the time complexity of this merge sort will be O(n^2 log n).
Question 121 |
3, 4, 5, 7, 9, 14, 20, 18, 17, 16, 15
| |
20, 18, 17, 16, 15, 14, 3, 4, 5, 7, 9
| |
20, 18, 17, 16, 15, 14, 9, 7, 5, 4, 3
| |
3, 4, 5, 7, 9, 14, 15, 16, 17, 18, 20
|
* This is because the inorder traversal of a BST always gives the elements in increasing order, so the answer can be obtained by sorting the given sequence in increasing order.
* Among all four options, (D) is the only increasing-order sequence.
Question 122 |
(~ (p ∧ q)) ↔ (~ p ∨ ~ q)
Then, the prefix expression and post-fix expression determined using this ordered tree are given as ____ and _____ respectively.
↔ ~ ∧pq∨ ~ ~ pq, pq∧~p~q~∨↔
| |
↔ ~ ∧pq∨ ~ p ~ q, pq∧~p~q~∨↔
| |
↔ ~ ∧pq∨ ~ ~ pq, pq∧~p~~q∨↔
| |
↔ ~ ∧pq∨ ~ p~ q, pq∧~p~ ~q∨↔
|
root = ↔
Left subtree = (~(p ∧ q))
Right subtree = (~ p ∨ ~ q)

Prefix exp: ↔∼∧pq∨∼p∼q
Postfix exp: pq∧∼p∼q∼∨↔
Option (B) is the correct answer.
Question 123 |
What is the output of given ‘C’ expression ? x * 3 && 3 || j | k
-1
| |
0
| |
1
| |
2
|
Taking x = 0 and j, k such that j | k is nonzero (as in the original question):
x * 3 && 3 || j | k
(x*3) && 3 || (j | k)
0 && 3 || (j | k)
(0 && 3) || 1
0 || 1
1
Question 124 |
O(lg n)
| |
O(n lg n)
| |
O(n)
| |
O(n^2)
|
* Inorder traversal takes O(n) time complexity in worst case
Question 125 |
Both time and space complexities are better in recursive than in non-recursive program.
| |
Both time and space complexities are better in non-recursive than in recursive program
| |
Time complexity is better in recursive version but space complexity is better in non-recursive version of the program.
| |
Space complexity is better in recursive version but time complexity is better in non-recursive version of the program.
|
→ Recursion does not save space or time; it is just the simplest way to express some solutions, and the activation records of the subroutine calls must be stored on the stack.
→ Recursion may not offer better space or time complexity; it only offers code that is more compact and simpler to write and understand than a non-recursive implementation.
Option (B) is the correct answer.
Question 126 |

It shows which of the following depth first forest?
{a, b, e} {c, d, f, g, h}
| |
{a, b, e} {c, d, h} {f, g}
| |
{a, b, e} {f, g} {c, d} {h}
| |
{a, b, c, d} {e, f, g} {h}
|

Check for the cycles in the graph: nodes a, b, e form one cycle; nodes f, g form another cycle; nodes c, d form another cycle; and h has a self loop.
This gives the depth-first forest {a, b, e} {f, g} {c, d} {h}.
So option (C) is the correct answer.
Question 127 |
O(log₂ n)
| |
O(n log₂ n)
| |
O(1)
| |
O(n)
|
Question 128 |
The item to be searched is in somewhere middle of the array
| |
The item to be searched is not in the array
| |
The item to be searched is in the last of the array
| |
The item to be searched is either in the last or not in the array
|
- The best case of the linear search algorithm occurs when the item to be searched is at the start of the array.
- The worst case of the linear search algorithm occurs when the item to be searched is at the end of the array, or not in the array at all.
Question 129 |
Counting microseconds
| |
Counting number of key operations
| |
Counting number of statements
| |
Counting kilobytes of algorithm
|
- Time Factor − Time is measured by counting the number of key operations such as comparisons in the sorting algorithm.
- Space Factor − Space is measured by counting the maximum memory space required by the algorithm.
Question 130 |
Selection sort
| |
Bubble sort
| |
Radix sort
| |
Insertion sort
|
- Selection sort takes O(n2) time.
- Bubble sort takes O(n2) time.
- Radix sort takes O(d·(n + k)) time, which is O(n) for a fixed number of digits.
- Insertion sort takes O(n2) time
Question 131 |
Bubble sort
| |
Selection sort
| |
Quick sort
| |
Insertion sort
|
Selection sort: a sorted array as input is its best case, but it still takes O(n^2) time.
Quick sort: a sorted array as input is its worst case, taking O(n^2) time.
Insertion sort: a sorted array as input is its best case; the best-case complexity is O(n).
Question 132 |
O(n!) and O(n log n)
| |
O(n^n) and O(n log n)
| |
O(n!) and O(log n!)
| |
O(n^n) and O(log n!)
|
n! = 1·2·3·4·5·6·...·(n-1)·n ≤ n·n·n·...·n = n^n, so n! = O(n^n)
log(n!) ≤ log(n^n) = n log n
so log(n!) = O(n log n)
Question 133 |
The solution of the recurrence relation T(m) = T(3m/4) + 1 is :
Θ(lg m)
| |
Θ(m)
| |
Θ(m lg m)
| |
Θ(lg lg m)
|
T(n) = a·T(n/b) + Θ(n^k · log^p n), where a ≥ 1, b > 1, k ≥ 0, and p is a real number.
Case 1: if a > b^k, then T(n) = Θ(n^(log_b a))
Case 2: if a = b^k,
a) if p > -1, then T(n) = Θ(n^(log_b a) · log^(p+1) n)
b) if p = -1, then T(n) = Θ(n^(log_b a) · log log n)
c) if p < -1, then T(n) = Θ(n^(log_b a))
Case 3: if a < b^k,
a) if p ≥ 0, then T(n) = Θ(n^k · log^p n)
b) if p < 0, then T(n) = Θ(n^k)
Given
T(m) = T(3m/4) + 1
T(m) = T(m/(4/3)) + 1
In this problem we use the Master Theorem:
T(m) = a·T(m/b) + Θ(m^k · log^p m)
which matches
T(m) = 1·T(m/(4/3)) + m^0 · log^0 m
where a = 1, b = 4/3, k = 0 and p = 0.
Since a = b^k (1 = (4/3)^0) and p > -1, case 2(a) of the Master Theorem applies:
T(m) = Θ(m^(log_b a) · log^(p+1) m)
Since log_(4/3) 1 = 0:
T(m) = Θ(m^0 · log^1 m) = Θ(log m)
Question 134 |
Prim’s algorithm
| |
Floyd - Warshall algorithm
| |
Johnson’s algorithm
| |
Dijkstra’s algorithm
|
Floyd-Warshall’s Algorithm : used for solving all-pairs shortest path problems, i.e. finding the shortest paths between all pairs of vertices in a graph.
Johnson’s algorithm : a way to find the shortest paths between all pairs of vertices in an edge-weighted, directed graph.
It allows some of the edge weights to be negative numbers, but no negative-weight cycles may exist.
It works by using the Bellman-Ford algorithm to compute a transformation of the input graph that removes all negative weights, allowing Dijkstra's algorithm to be used on the transformed graph.
The time complexity of the Floyd-Warshall algorithm is Θ(V^3). Using Johnson’s algorithm, we can find all-pairs shortest paths in O(V^2 log V + VE) time.
Johnson’s algorithm uses both Dijkstra and Bellman-Ford as subroutines.
Dijkstra’s Algorithm : used to find the shortest path from a single source to all the other nodes in a weighted graph, which is why it is called single-source shortest path.
Question 135 |
0.08, 0.40, 0.25, 0.15 and 0.12 respectively.
The optimal coding technique will have the average length of :
2.4
| |
1.87
| |
3.0
| |
2.15
|
A → 0.08
B → 0.40
C → 0.25
D → 0.15
E → 0.12

Now Draw a Huffman tree:

length for each character = no of bits * frequency of occurrence:
Length of A = 1110 = 4 * 0.08 = 0.32
Length of B = 0 = 1 * 0.4 = 0.4
Length of C = 10 = 2 * 0.25 = 0.5
Length of D = 110 = 3 * 0.15 = 0.45
Length of E = 1111 = 4 * 0.12 = 0.48
Now add these length to find Average length:
Average length = 0.32 + 0.4 + 0.5 + 0.45 + 0.48 = 2.15
Question 136 |
(a)-(iv),(b)-(i), (c)-(iii), (d)-(ii)
| |
(a)-(iv),(b)-(iii), (c)-(i), (d)-(ii)
| |
(a)-(iii),(b)-(iv), (c)-(ii), (d)-(i)
| |
(a)-(iv),(b)-(iii), (c)-(ii), (d)-(i)
|
- 8-Queens Problem is Backtracking.
- Single-Source shortest Paths is Greedy approach.
- STRASSEN’s Matrix Multiplication is Divide and conquer technique.
- Optimal Binary Search trees is Dynamic programming.
Question 137 |
45
| |
72
| |
360
| |
450
|
- An octal number is composed of 8 possible digit values (0 to 7).
- The question concerns 5-digit octal numbers, i.e. the number of digits is 5.
where
Digits = 5,
Base of the number system = 8,
Number of values = 9.
Therefore the maximum number of comparisons = 5 × 8 × 9 = 360.
Question 138 |
Bubble
| |
Shake
| |
Tree
| |
Insertion
|

Question 139 |
int module(int n) { if (n == 1) return 1; else return (n + module(n-1)); }
O(n)
| |
O(log n)
| |
O(n^2)
| |
O(n!)
|
module(n) returns n + module(n-1), where
module(n-1) is the recursive call and
adding n is constant-time work.
Recurrence relation: T(n) = T(n-1) + c
= T(n-2) + c + c = T(n-2) + 2c
= T(n-k) + k·c
∴ n - k = 1 ⇒ k = n - 1
= T(n-n+1) + (n-1)·c
= T(1) + (n-1)·c
= O(n)
Question 140 |
b= branch factor
d= depth of shallowest solution
M= Maximum depth of the search tree
I= depth limit

Code:
(a)-(iii), (b)-(ii), (c)-(iv), (d)-(i)
| |
(a)-(ii), (b)-(iii), (c)-(iv), (d)-(i)
| |
(a)-(i), (b)-(ii), (c)-(iv), (d)-(iii)
| |
(a)-(i), (b)-(iii), (c)-(iv), (d)-(ii)
|
Breadth-first search goes through the tree level by level, visiting all of the nodes on the top level first, then all the nodes on the second level, and so on. This strategy has the benefit of being complete (if there's a solution, it will be found), and optimal as long as the shallowest solution is the best solution. However, the way breadth-first search achieves this is by keeping all of the leaf nodes in memory, which requires a prohibitive amount of memory when searching anything more than a very small tree.
Time complexity : the time complexity of breadth-first search is O(b^d), where b is the branching factor and d is the depth of the solution.
Space complexity : the space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
Depth-First Search
Depth-first search goes through the tree branch by branch, going all the way down to the leaf nodes at the bottom of the tree before trying the next branch over. This strategy requires much less memory than breadth-first search, since it only needs to store a single path from the root of the tree down to the leaf node. However, it is potentially incomplete, since it will keep going on down one branch until it finds a dead-end, and it is nonoptimal -- if there's a solution at the fourth level in the first branch tried, and a solution at the second level in the next one over, the solution at the fourth level will be returned.
Time complexity : the time complexity of depth-first search is O(b^m), where b is the branching factor (2 for binary trees) and m is the maximum depth of the tree.
Space complexity : the space complexity is O(b·m).
Depth-Limited Search
Depth-limited search essentially does a depth-first search with a cutoff at a specified depth limit. When the search hits a node at that depth, it stops going down that branch and moves over to the next one. This avoids the potential problem with depth-first search of going down one branch indefinitely. However, depth-limited search is incomplete → if there is a solution, but only at a level deeper than the limit, it will not be found. It is also nonoptimal in the same way as depth-first search is.
Time complexity : Time complexity of depth-limited search is O(b^l), where b is the branching factor and l is the depth limit.
Space complexity : Space complexity is O(bl).
Iterative Deepening Search
Iterative deepening does repeated depth-limited searches, starting with a limit of zero and incrementing once each time. As a result, it has the space-saving benefits of depth-first search, but is also complete and optimal, since it will visit all the nodes on the same level first before continuing on to the next level in the next round when the depth is incremented.
Time complexity : Time complexity of iterative deepening search is O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
Space complexity : Space complexity is O(bd).
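The repeated depth-limited searches can be sketched as below; this is an illustrative outline (the tree and function names are assumptions for the example), returning the depth at which the goal is first found:

```python
def depth_limited(tree, node, goal, limit):
    """Depth-first search cut off at the given depth limit."""
    if node == goal:
        return True
    if limit == 0:
        return False  # cutoff reached: stop going down this branch
    return any(depth_limited(tree, child, goal, limit - 1)
               for child in tree.get(node, []))

def iterative_deepening(tree, root, goal, max_depth=20):
    """Run depth-limited searches with limits 0, 1, 2, ... so the
    shallowest occurrence of the goal is always the one found."""
    for limit in range(max_depth + 1):
        if depth_limited(tree, root, goal, limit):
            return limit
    return None

tree = {"A": ["B", "C"], "B": ["D"], "C": ["E"]}
print(iterative_deepening(tree, "A", "E"))  # 2: "E" sits two levels below the root
```

Each round redoes the shallower levels, but since the bottom level dominates the node count, the total work stays O(b^d).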
Question 141 |
O(log n)
| |
O(n)
| |
O(1)
| |
O(n²)
|
→ The time required to search an element in a linked list of length n is O(n), since in the worst case every node must be visited.
→ A linked list holds two main pieces of information per node (the value and a pointer), so the amount of data stored increases linearly with the number of nodes in the list.
→ Therefore, the space complexity of the linked list is also linear, O(n).
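The O(n) search bound follows directly from having to walk the list one pointer at a time; a minimal sketch (the `Node` class and `search` helper are illustrative, not from a standard library):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value  # the data stored in this node
        self.next = next    # pointer to the next node (None at the tail)

def search(head, key):
    """Walk the list node by node; returns the number of comparisons,
    or -1 if the key is absent. No random access is possible."""
    steps = 0
    while head is not None:
        steps += 1
        if head.value == key:
            return steps
        head = head.next
    return -1

head = Node(1, Node(2, Node(3)))  # build the list 1 -> 2 -> 3
print(search(head, 3))  # 3: the whole list had to be traversed
```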
Question 142 |
Recursion
| |
Iteration
| |
Looping
| |
Structures
|
Question 143 |
Knight's tour
| |
Eight Queens problem
| |
Kings problem
| |
Rooks problem
|
- The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal.
- The eight queens puzzle is an example of the more general n queens problem of placing n non-attacking queens on an n×n chessboard, for which solutions exist for all natural numbers n with the exception of n = 2 and n = 3.
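The standard way to solve the puzzle is backtracking; a minimal sketch (the function names are illustrative) that places one queen per row and undoes a placement as soon as it conflicts with an earlier queen:

```python
def solve_n_queens(n):
    """Count the solutions to the n-queens problem by backtracking."""
    cols = []   # cols[r] = column of the queen placed in row r
    count = 0

    def safe(row, col):
        # A placement is safe if no earlier queen shares the column
        # or either diagonal (same |row - r| as |col - c|).
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        nonlocal count
        if row == n:
            count += 1          # all n queens placed: one full solution
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)   # tentatively place a queen
                place(row + 1)
                cols.pop()         # backtrack and try the next column

    place(0)
    return count

print(solve_n_queens(8))  # 92 solutions on the standard 8x8 board
```

Note how the count matches the statement above: `solve_n_queens(2)` and `solve_n_queens(3)` both return 0, while every other n has at least one solution.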
Question 144 |
Knight's tour
| |
Eight Queens problem
| |
Kings problem
| |
Rooks problem
|
- The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal.
- The eight queens puzzle is an example of the more general n queens problem of placing n non-attacking queens on an n×n chessboard, for which solutions exist for all natural numbers n with the exception of n = 2 and n = 3.
Question 145 |
Logarithmic
| |
Linear
| |
Quadratic
| |
Exponential |
- For "n" boolean variables there are 2^n rows in the truth table.
- To determine whether the boolean function ever produces the output 1, in the worst case every possible row must be checked → O(2^n), which is exponential.
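The exhaustive check can be sketched as follows; a minimal illustration (the `is_satisfiable` helper and example functions are assumptions for the demo) that enumerates all 2^n rows:

```python
from itertools import product

def is_satisfiable(f, n):
    """Return True if f outputs 1 (True) on some row of its truth table.
    product([False, True], repeat=n) generates all 2**n rows; an
    unsatisfiable f forces every row to be examined -> exponential time."""
    return any(bool(f(*row)) for row in product([False, True], repeat=n))

# f(a, b, c) = a AND (NOT b) AND c is satisfied by the row (1, 0, 1)
print(is_satisfiable(lambda a, b, c: a and not b and c, 3))  # True
# a AND (NOT a) is a contradiction: all 4 rows are checked, none works
print(is_satisfiable(lambda a, b: a and not a, 2))           # False
```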
Question 146 |
Complete graph
| |
Hamiltonian graph
| |
Euler graph
| |
None of the above
|
When a graph contains a cycle, it may or may not have a unique spanning tree.
Hence option(D) is correct answer.
Question 147 |
O(1) | |
O(Log n) | |
O(n) | |
O(n²) |
Question 148 |
A | |
B | |
C | |
D |
Question 149 |
Match List-I with List-II:
List-I : a. Merge sort; b. Huffman coding; c. Optimal polygon triangulation; d. Subset sum problem
List-II : i. Dynamic programming; ii. Greedy approach; iii. Divide and conquer; iv. Back tracking
Code :
a-iii, b-i, c-ii, d-iv | |
a-ii, b-i, c-iv, d-iii | |
a-ii, b-i, c-iii, d-iv | |
a-iii, b-ii, c-i, d-iv |
- Merge sort is a Divide and conquer algorithm.
- Huffman coding is a Greedy approach.
- Optimal polygon triangulation is Dynamic programming algorithm.
- Subset sum problem is a Back tracking algorithm.
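Merge sort's divide-and-conquer structure can be sketched as follows (a minimal illustration; the helper name is an assumption): split the array in half, sort each half recursively, then merge the two sorted halves.

```python
def merge_sort(a):
    """Divide and conquer: split, recursively sort, then merge."""
    if len(a) <= 1:
        return a  # base case: a 0- or 1-element array is already sorted
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append the leftover tail

print(merge_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```

The linear-time merge at each of the log n levels of recursion is what gives the O(n log n) bound for every input ordering, as noted in Question 3.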
Question 150 |
O(log n) | |
O(n log n) | |
O(n) | |
None of the above |
Question 151 |
Insertion in a sorted array takes constant time. | |
Insertion in a sorted linear linked list takes constant time. | |
Searching for a key in a sorted array can be done in O(log n) time. | |
Searching for a key in a sorted linear linked list can be done in O(log n) time. |
- In an unsorted array, the insert operation is faster than in a sorted array, because there is no need to care about the position at which the element is placed.
- Binary search requires a sorted array. The insert operation on a sorted array takes O(n) time, since in the worst case all elements may have to be moved.
- In a sorted linked list, sorted order must be maintained after each insertion, so each insertion takes θ(n) time.
- Therefore the total cost of building a sorted linked list with n insertions is θ(n²).
- Searching for a key in a sorted array can be done in O(log n) time via the binary search algorithm.
- Binary search can be implemented on a linked list, but the implementation still takes O(n) time, because a linked list offers no constant-time random access to the middle element.
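The O(log n) bound on a sorted array comes from halving the search interval at every step; a minimal sketch of binary search (the function name is illustrative):

```python
def binary_search(a, key):
    """Return the index of key in sorted array a, or -1 if absent.
    Each iteration halves [lo, hi], so at most O(log n) comparisons."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # constant-time random access: needs an array
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1  # key, if present, lies in the right half
        else:
            hi = mid - 1  # key, if present, lies in the left half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
```

The `a[mid]` access on the commented line is exactly what a linked list cannot do in O(1), which is why the same algorithm degrades to O(n) there.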
Question 152 |
Ω(n) | |
Ω(n/k) | |
Ω(n log k) | |
Ω((n/k) log(n/k)) |
Question 153 |
Ω(lg n) | |
Ω(n) | |
Ω(n lg n) | |
Ω(n²) |