Asymptotic Worst Case Time and Time Complexity MCQ Quiz - Objective Questions with Answers

Last updated on Jun 25, 2025

Latest Asymptotic Worst Case Time and Time Complexity MCQ Objective Questions

Asymptotic Worst Case Time and Time Complexity Question 1:

Assume that P and NP are different, i.e., P ≠ NP. Then which of the following equals NP-Complete ∩ P?

  1. NP-Hard 
  2. ∅
  3. P
  4. NP-Complete

Answer (Detailed Solution Below)

Option 2 : ∅

Asymptotic Worst Case Time and Time Complexity Question 1 Detailed Solution

 

Correct Answer: Option 2) ∅

Key Points:

  • NP-Complete problems are those that are:
    • In NP (verifiable in polynomial time), and
    • As hard as any problem in NP (i.e., every NP problem reduces to it in polynomial time).
  • If a problem is both in P and NP-Complete, then all NP problems can be solved in polynomial time, i.e., P = NP.
  • But it is given that P ≠ NP, so no NP-Complete problem can be in P.

Therefore, the intersection of P and NP-Complete is empty (∅).

Additional Information:

  • NP-Hard: Includes problems at least as hard as the hardest problems in NP, but not necessarily in NP (they need not even be decidable).
  • P: Problems solvable in polynomial time.
  • NP: Problems whose solutions can be verified in polynomial time.
  • NP-Complete: Subset of NP problems that are also NP-Hard.

Hence: If P ≠ NP, no NP-Complete problem lies in P ⇒ NP-Complete ∩ P = ∅

Asymptotic Worst Case Time and Time Complexity Question 2:

Match List - I with List - II

LIST I (Algorithms)                                               LIST II (Complexity)

A. Bellman-Ford algorithm (with adjacency list representation)    I.   O(|V|²)

B. Dijkstra's algorithm                                           II.  O((V + E) log V)

C. Prim's algorithm                                               III. O(nm)

D. Topological sorting (with adjacency list representation)       IV.  O(n + m)

Choose the correct answer from the options given below:

  1. (A) - (III), (B) - (I), (C) - (II), (D) - (IV) 
  2. (A) - (II), (B) - (IV), (C) - (III), (D) - (I)
  3. (A) - (III), (B) - (IV), (C) - (I), (D) - (II) 
  4. (A) - (II), (B) - (I), (C) - (III), (D) - (IV)

Answer (Detailed Solution Below)

Option 1 : (A) - (III), (B) - (I), (C) - (II), (D) - (IV) 

Asymptotic Worst Case Time and Time Complexity Question 2 Detailed Solution

The correct answer is (A) - (III), (B) - (I), (C) - (II), (D) - (IV)

  • A (Bellman-Ford Algorithm) → III (O(nm))
  • B (Dijkstra Algorithm) → I (O(|V|²))
  • C (Prim's Algorithm) → II (O((V + E) log V))
  • D (Topological Sorting) → IV (O(n + m)) 

Key Points

  1. Bellman-Ford algorithm → O(nm): with an adjacency list it relaxes all m = |E| edges in each of the |V| − 1 passes, giving O(VE) = O(nm).

  2. Dijkstra's algorithm → O(|V|²): the simple array (or adjacency-matrix) implementation scans all vertices to extract the minimum in each of the |V| iterations, giving O(V²).

  3. Prim's algorithm → O((V + E) log V): with a binary min-heap and an adjacency list, each of the O(V + E) heap operations costs O(log V).

  4. Topological sorting → O(n + m): DFS or Kahn's algorithm visits each vertex and each edge exactly once, giving O(V + E) = O(n + m); see the sketch after this list.
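To make the O(n + m) bound for topological sorting concrete, here is a minimal C sketch of Kahn's algorithm on an adjacency list. It is an illustrative sketch only, not part of the original question; the fixed capacities MAXV/MAXE and the array-based edge list are assumptions chosen for brevity.

#include <stdio.h>

#define MAXV 100   /* illustrative capacities, not from the question */
#define MAXE 1000

/* adjacency list stored as parallel arrays (head / next-edge lists) */
int head[MAXV], to[MAXE], nxt[MAXE], indeg[MAXV], nedges = 0;

void add_edge(int u, int v) {                 /* O(1) per edge */
    to[nedges] = v; nxt[nedges] = head[u]; head[u] = nedges++; indeg[v]++;
}

/* Kahn's algorithm: every vertex enters the queue once and every edge
   is scanned once, so the running time is O(V + E) = O(n + m). */
void topo_sort(int V) {
    int queue[MAXV], qh = 0, qt = 0;
    for (int v = 0; v < V; v++)
        if (indeg[v] == 0) queue[qt++] = v;
    while (qh < qt) {
        int u = queue[qh++];
        printf("%d ", u);
        for (int e = head[u]; e != -1; e = nxt[e])
            if (--indeg[to[e]] == 0) queue[qt++] = to[e];
    }
    printf("\n");
}

int main(void) {
    int V = 4;
    for (int v = 0; v < V; v++) head[v] = -1;
    add_edge(0, 1); add_edge(0, 2); add_edge(1, 3); add_edge(2, 3);
    topo_sort(V);   /* prints one valid topological order, e.g. 0 2 1 3 */
    return 0;
}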

Important Points

  • Bellman-Ford algorithm is used for finding the shortest paths from a single source vertex to all other vertices in a weighted graph.
  • Dijkstra's algorithm is also used for finding the shortest path in a weighted graph but does not work with negative weight edges.
  • Prim's algorithm is used to find the minimum spanning tree of a weighted graph.
  • Topological sorting is used for ordering the vertices of a directed acyclic graph (DAG).

Additional Information

  • The Bellman-Ford algorithm can handle negative weights, unlike Dijkstra's algorithm.
  • Prim's algorithm and Dijkstra's algorithm have similar structures, but they solve different problems.
  • Topological sorting is crucial for task scheduling, such as in project management.

Asymptotic Worst Case Time and Time Complexity Question 3:

There are n unsorted arrays: A1, A2, ..., An. (Assume that n is odd) Each of A1, A2, ..., An contains n distinct elements. There are no common elements between any two arrays. The worst-case time complexity of computing the median of the medians of A1, A2, ..., An is 

  1. O(n) 
  2. O(n log n)
  3. O(n²)
  4. O(n²/log n)

Answer (Detailed Solution Below)

Option 3 : O(n²)

Asymptotic Worst Case Time and Time Complexity Question 3 Detailed Solution

The correct answer is O(n²).

Explanation:

There are n unsorted arrays, and each array contains n distinct elements.

The time complexity of finding the median of one unsorted array of size n is O(n) (using a linear-time selection algorithm).

Since there are n such arrays, the total time to find the medians of all the arrays is O(n²).

Store the n medians in an array and find the median of that array, which takes another O(n).

Total Time complexity:

T(n) = O(n²) + O(n) = O(n²)
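Below is a minimal C sketch of the procedure just described, added for illustration only. It uses quickselect for each median, which gives expected O(n) per array (a guaranteed worst-case O(n) per selection would require the deterministic median-of-medians selection algorithm); the helper name qselect and the small 5 × 5 example are assumptions, not part of the question.

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Quickselect (Lomuto partition): returns the k-th smallest (0-based)
   element of a[lo..hi]. Expected O(length) time.                      */
static int qselect(int a[], int lo, int hi, int k) {
    while (lo < hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) swap(&a[i++], &a[j]);
        swap(&a[i], &a[hi]);
        if (k == i) return a[i];
        else if (k < i) hi = i - 1;
        else lo = i + 1;
    }
    return a[lo];
}

#define N 5   /* n is odd, as the question assumes */

int main(void) {
    int A[N][N] = {{9, 1, 5, 3, 7}, {2, 8, 6, 4, 10}, {15, 11, 13, 12, 14},
                   {21, 17, 19, 18, 20}, {25, 22, 24, 23, 26}};
    int med[N];
    for (int i = 0; i < N; i++)                 /* n selections, ~O(n) each */
        med[i] = qselect(A[i], 0, N - 1, N / 2);
    /* one more selection over the n medians: O(n) */
    printf("median of medians = %d\n", qselect(med, 0, N - 1, N / 2));
    return 0;                                   /* total: O(n^2) overall */
}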

Asymptotic Worst Case Time and Time Complexity Question 4:

Which of the given options provides the increasing order of asymptotic complexity of functions f1, f2, f3 and f4?

A. f2(n) = n^(3/2)

B. f1(n) = 2^n

C. f4(n) = n^(log n)

D. f3(n) = n log n

Choose the correct answer from the options given below

  1. B, C, D, A 
  2. C, B, A, D
  3. B, C, A, D
  4. D, A, C, B

Answer (Detailed Solution Below)

Option 4 : D, A, C, B

Asymptotic Worst Case Time and Time Complexity Question 4 Detailed Solution

The correct answer is option 4.

Explanation: 

Here are some common Big O notations from the smallest to the largest complexity:

  • O(1): Constant-time complexity. The running time is independent of the input size, it always takes the same time to compute.
  • O(log n): Logarithmic time complexity. The running time grows logarithmically in proportion to the input size.
  • O(n): Linear time complexity. The running time grows linearly with the size of the input.
  • O(n log n): Log-linear or Linearithmic time complexity. Running time grows linearly and in the order of the logarithm of the size of the input.
  • O(n^2): Quadratic time complexity. The running time is proportional to the square of the size of the input data.
  • O(n^3): Cubic time complexity. The running time is proportional to the cube of the size of the input data.
  • O(2^n): Exponential time complexity. The resources needed for algorithm execution double with each addition to the input data set.
  • O(n!): Factorial time complexity. The running time grows in proportion to the factorial of the input size. This is typical of algorithms that solve problems by generating all possible combinations.

Written as concrete functions, that increasing chain looks like:

  • f1(n) = 1 (O(1))
  • f2(n) = log n (O(log n))
  • f3(n) = n (O(n))
  • f4(n) = n log n (O(n log n))
  • f5(n) = n^2 (O(n^2))
  • f6(n) = n^3 (O(n^3))
  • f7(n) = 2^n (O(2^n))
  • f8(n) = n! (O(n!))

So, the given functions are:

  •  f1(n) = 2^n (exponential)
  •  f2(n) = n^(3/2) (polynomial)
  •  f3(n) = n log n (linearithmic)
  •  f4(n) = n^(log n) (quasi-polynomial: it grows faster than any fixed polynomial but slower than 2^n)

The functions in ascending order: n log n < n^(3/2) < n^(log n) < 2^n, i.e. D, A, C, B.
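One quick way to verify this ordering (an added check, not part of the original solution) is to compare logarithms, since log is monotonically increasing:

\(\log(n\log n) = \log n + \log\log n\)

\(\log\left(n^{3/2}\right) = \tfrac{3}{2}\log n\)

\(\log\left(n^{\log n}\right) = \log^2 n\)

\(\log\left(2^n\right) = n\)

For large n, \(\log n + \log\log n < \tfrac{3}{2}\log n < \log^2 n < n\), which gives n log n < n^(3/2) < n^(log n) < 2^n.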

Asymptotic Worst Case Time and Time Complexity Question 5:

Which of the following is NOT an NP-Complete problem? 

  1. Traveling Salesman Problem
  2. Boolean Satisfiability Problem
  3. Shortest Path Problem
  4. More than one of the above
  5. None of the above

Answer (Detailed Solution Below)

Option 3 : Shortest Path Problem

Asymptotic Worst Case Time and Time Complexity Question 5 Detailed Solution

The correct answer is Shortest Path Problem.

Key Points

  • The Shortest Path Problem is a classic problem in graph theory that involves finding the shortest path between two vertices in a weighted graph.
    • One common algorithm used to solve this problem is Dijkstra's algorithm, which efficiently finds the shortest paths from a single source vertex to all other vertices in a graph with non-negative edge weights.
    • Another well-known algorithm is the Bellman-Ford algorithm, which can handle graphs with negative edge weights and detect negative cycles.
    • The Shortest Path Problem is not considered an NP-Complete problem because it can be solved in polynomial time using these algorithms.
    • In contrast, NP-Complete problems, such as the Traveling Salesman Problem and the Boolean Satisfiability Problem, are known to be computationally difficult and no polynomial-time solution is known for them.

Additional Information

  • NP-Complete problems are a class of problems for which no known polynomial-time algorithms exist, and they are believed to be inherently difficult to solve.
  • Other well-known NP-Complete problems include the Knapsack Problem, the Hamiltonian Cycle Problem, and the Vertex Cover Problem.
  • The concept of NP-Completeness is central to the field of computational complexity theory, which studies the inherent difficulty of computational problems and the resources required to solve them.
  • Proving that a problem is NP-Complete involves demonstrating that it is both in NP (nondeterministic polynomial time) and that every problem in NP can be reduced to it in polynomial time.

Top Asymptotic Worst Case Time and Time Complexity MCQ Objective Questions

Consider the recurrence function \(T\left( n \right) = \left\{ {\begin{array}{*{20}{c}} {2T\left( {\sqrt n } \right) + 1,\;\;\;n > 2}\\ {2,\;\;\;0 < n \le 2} \end{array}} \right.\) Then T(n) in terms of Θ notation is

  1. Θ (log log n)
  2. Θ (log n)
  3. Θ (√n)
  4. Θ (n)

Answer (Detailed Solution Below)

Option 2 : Θ (log n)

Asymptotic Worst Case Time and Time Complexity Question 6 Detailed Solution


\(T\left( n \right) = 2\;T\left( {\sqrt n } \right) + 1\)

Put n = 2^m, i.e. m = log₂n.

\(T\left( {{2^m}} \right) = 2T\left( {{2^{\frac{m}{2}}}} \right) + 1\)

Put \(T\left( {{2^m}} \right) = S\left( m \right)\):

\(S\left( m \right) = 2\;S\left( {\frac{m}{2}} \right) + 1\)

Apply the Master theorem with a = 2, b = 2, f(m) = 1. Here \(m^{\log_2 2} = m\), which dominates the constant f(m), so

S(m) = Θ(m)

Substituting back m = log₂n:

Therefore, T(n) in terms of Θ notation is Θ(log n).
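An alternative check by direct unrolling (added for illustration):

\(T(n) = 2T\left(n^{1/2}\right) + 1 = 4T\left(n^{1/4}\right) + 3 = \cdots = 2^k\,T\left(n^{1/2^k}\right) + \left(2^k - 1\right)\)

The recursion bottoms out when \(n^{1/2^k} \le 2\), i.e. when \(2^k \approx \log_2 n\) (after \(k \approx \log_2\log_2 n\) levels), so \(T(n) = \Theta(2^k) = \Theta(\log n)\), matching the result above.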

What is the time complexity for the following C module? Assume that n > 0;

int module(int n)
{
    if (n == 1)
        return 1;                     /* base case: constant work */
    else
        return (n + module(n - 1));   /* one addition + one recursive call */
}

  1. O(n)
  2. O(log n)
  3. O(n2)
  4. O(n!)

Answer (Detailed Solution Below)

Option 1 : O(n)

Asymptotic Worst Case Time and Time Complexity Question 7 Detailed Solution


Answer: Option 1

Explanation

Each call performs one addition and one recursive call: module(n) returns n + module(n − 1), so the non-recursive work per call is constant.

The recurrence relation is:

T(n) = T(n-1) + c

= T(n-2) + 2c

= T(n-3) + 3c

...

= T(n-k) + kc

When n - k = 1, i.e. k = n - 1, and with T(1) = 1:

= T(1) + (n-1)c

= 1 + (n-1)c

= O(n)

Hence Option 1 is the correct answer.

Note: some of you might have come up with the recurrence T(n) = T(n-1) + n, which is incorrect. The term outside T(·) in a recurrence represents the cost of the non-recursive work, and here that work is a single addition (a constant), even though the value being added is n.
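A small driver (added here purely for illustration) makes the constant-work-per-call point concrete: module(n) computes 1 + 2 + … + n = n(n+1)/2, and each of the n calls contributes only one addition.

#include <stdio.h>

int module(int n) {                  /* same function as above */
    if (n == 1)
        return 1;
    else
        return n + module(n - 1);    /* one addition per call */
}

int main(void) {
    for (int n = 1; n <= 5; n++)     /* prints 1, 3, 6, 10, 15 = n(n+1)/2 */
        printf("module(%d) = %d\n", n, module(n));
    return 0;
}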

Which one of the following correctly determines the solution of the recurrence relation with

T(1) = 1?

\(T\left( n \right) = 2T\left( {\frac{n}{2}} \right) + \log n\)

  1. Θ(n)
  2. Θ(n log n)
  3. Θ (n2)
  4. Θ(log n)

Answer (Detailed Solution Below)

Option 1 : Θ(n)

Asymptotic Worst Case Time and Time Complexity Question 8 Detailed Solution


\({\rm{T}}\left( {\rm{n}} \right) = 2{\rm{T}}\left( {\frac{{\rm{n}}}{2}} \right) + \log {\rm{n}}\)

Comparing with:

\({\rm{T}}\left( {\rm{n}} \right) = {\rm{aT}}\left( {\frac{{\rm{n}}}{{\rm{b}}}} \right) + f\left( n \right)\)

a = 2, b = 2, f(n) = log n

\(n^{\log_2 2} = n\), which dominates f(n) = log n.

By Case 1 of the Master theorem,

\(T(n) = \Theta(n)\)
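Spelling out the Case 1 condition (added detail):

\(f(n) = \log n = O\left(n^{1-\varepsilon}\right)\) for any \(0 < \varepsilon < 1\) (for example \(\varepsilon = \tfrac12\), since \(\log n = O(\sqrt n)\)), so f(n) is polynomially smaller than \(n^{\log_2 2} = n\), and Case 1 of the Master theorem gives \(T(n) = \Theta(n)\).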

Consider the following C function.

int fun1(int n) {
    int i, j, k, p, q = 0;
    for (i = 1; i <= n; ++i) {        /* outer loop: n iterations */
        p = 0;
        for (j = n; j > 1; j = j/2)   /* p becomes about log2(n)  */
            ++p;
        for (k = 1; k < p; k = k*2)   /* adds about log2(p) to q  */
            ++q;
    }
    return q;
}

Which one of the following most closely approximates the return value of the function fun1?

  1. n³
  2. n(log n)²
  3. n log n
  4. n log (log n)

Answer (Detailed Solution Below)

Option 4 : n log (log n)

Asymptotic Worst Case Time and Time Complexity Question 9 Detailed Solution



The j-loop halves j from n down to 1, so after it finishes p ≈ log₂n. The k-loop doubles k from 1 up to p, so each outer iteration adds about log₂p = log₂(log₂n) to q. The outer loop runs n times, hence:

\(p = \log n,\quad q \approx n\,\log p = n\log\left(\log n\right)\)
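For intuition, here is a small driver (added for illustration, not part of the question) that compares the returned value with n·log₂(log₂ n); linking may need -lm because of math.h.

#include <stdio.h>
#include <math.h>

int fun1(int n) {                         /* corrected version from above */
    int i, j, k, p, q = 0;
    for (i = 1; i <= n; ++i) {
        p = 0;
        for (j = n; j > 1; j = j / 2)     /* p becomes ~log2(n)           */
            ++p;
        for (k = 1; k < p; k = k * 2)     /* adds ~log2(p) = log2(log2 n) */
            ++q;
    }
    return q;
}

int main(void) {
    for (int n = 16; n <= 65536; n *= 16)
        printf("n = %6d   fun1(n) = %8d   n*log2(log2 n) = %8.0f\n",
               n, fun1(n), n * log2(log2((double)n)));
    return 0;
}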

Give asymptotic upper and lower bound for T(n) given below. Assume T(n) is constant for \(n \le 2.\;T\left( n \right) = 4T\left( {\sqrt n } \right) + lo{g^2}n\)

  1. T(n) = θ(lg(lg²n) lg n)
  2. T(n) = θ(lg²n lg n)
  3. T(n) = θ(lg²n lg lg n)
  4. T(n) = θ(lg(lg n) lg n) 

Answer (Detailed Solution Below)

Option 3 : T(n) = θ(lg²n lg lg n)

Asymptotic Worst Case Time and Time Complexity Question 10 Detailed Solution


\(T\left( n \right) = 4T\left( {\sqrt n } \right) + \log^2 n\)

Consider n = 2^m, i.e. m = log₂n, so that √n = 2^(m/2).

T(2^m) = 4T(2^(m/2)) + m²

Put S(m) = T(2^m):

S(m) = 4 S(m/2) + m²

Comparing with S(m) = a S(m/b) + c·m^k:

a = 4, b = 2, k = 2, so a = b^k

By the Master theorem (the a = b^k case):

S(m) = O(m^k log m) = O(m² log m)

Substituting back m = log₂n, the time complexity is O(log²n · log(log n)).

Consider the following C function.

int fun(int n) {
    int i, j;
    for (i = 1; i <= n; i++) {          /* outer loop: n iterations         */
        for (j = 1; j < n; j += i) {    /* inner loop: about n/i iterations */
            printf("%d %d", i, j);
        }
    }
}

Time complexity of fun in terms of Θ notation is

  1. Θ (n√n)
  2. Θ (n2)
  3. Θ (nlog n)
  4. Θ (n2log n)

Answer (Detailed Solution Below)

Option 3 : Θ (nlog n)

Asymptotic Worst Case Time and Time Complexity Question 11 Detailed Solution


We have to check how many times the inner loop executes.

For i = 1, j takes the values 1, 2, 3, …, so the inner loop runs about n times.

For i = 2, j takes the values 1, 3, 5, 7, …, so it runs about n/2 times.

For i = 3, j takes the values 1, 4, 7, 10, …, so it runs about n/3 times.

In general, for a given i the inner loop runs about n/i times, so:

\(T\left( n \right) = n + \frac{n}{2} + \frac{n}{3} + \frac{n}{4} + \frac{n}{5} + \frac{n}{6} \ldots + \frac{n}{n}\)

\(= n\left( {1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} \ldots + \frac{1}{n}} \right)\;\)

So, the time complexity of the given program is Θ(n log n).
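The last step uses the standard harmonic-series estimate (added for completeness):

\(H_n = \sum_{i=1}^{n}\frac{1}{i} = \ln n + O(1)\), so \(T(n) = n\,H_n = \Theta(n\log n)\).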

Let f(n) = n and g(n) = n^(1 + sin n), where n is a positive integer. Which of the following statements is/are correct?

I. f(n) = O(g(n))

II. f(n) = Ω(g(n))

  1. Only I
  2. Only II
  3. Both I and II
  4. Neither I nor II

Answer (Detailed Solution Below)

Option 4 : Neither I nor II

Asymptotic Worst Case Time and Time Complexity Question 12 Detailed Solution


Concept:

The value of sin n oscillates between -1 and +1, coming arbitrarily close to both endpoints infinitely often.

Explanation:

Case 1: when sin n is (close to) -1,

g(n) ≈ n^(1 - 1) = n^0 = 1

so for such n, f(n) > g(n), i.e. g(n) = O(f(n)); statement I (f(n) = O(g(n))) fails on these n.

Case 2: when sin n is (close to) +1,

g(n) ≈ n^(1 + 1) = n²

so for such n, f(n) < g(n); statement II (f(n) = Ω(g(n))) fails on these n.

Because sin n keeps oscillating, neither statement holds for all sufficiently large n.

Hence, option 4 is the correct answer.
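The same conclusion can be read directly off the exponent (added remark):

\(\log g(n) = (1+\sin n)\log n\), while \(\log f(n) = \log n\). Since sin n comes arbitrarily close to −1 infinitely often, there are infinitely many n with g(n) far smaller than f(n), ruling out f(n) = O(g(n)); and since sin n comes arbitrarily close to +1 infinitely often, there are infinitely many n with g(n) far larger than f(n), ruling out f(n) = Ω(g(n)).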

The following multithreaded algorithm computes transpose of a matrix in parallel:

pTrans(X, Y, N)
    if N = 1
        then Y[1, 1] ← X[1, 1]
    else
        partition X into four (N/2) × (N/2) submatrices X11, X12, X21, X22
        partition Y into four (N/2) × (N/2) submatrices Y11, Y12, Y21, Y22
        spawn pTrans(X11, Y11, N/2)
        spawn pTrans(X12, Y12, N/2)
        spawn pTrans(X21, Y21, N/2)
        spawn pTrans(X22, Y22, N/2)

What is the asymptotic parallelism of the algorithm?

  1. T1/T∞ or θ(N²/lg N)
  2. T1/T∞ or θ(N/lg N) 
  3. T1/T∞ or θ(lg N/N²)
  4. T1/T∞ or θ(lg N/N)

Answer (Detailed Solution Below)

Option 1 : T1/T∞ or θ(N²/lg N)

Asymptotic Worst Case Time and Time Complexity Question 13 Detailed Solution


During the transpose, the matrix is partitioned into four (N/2) × (N/2) submatrices.

Work (serial running time T1): if N = 1, then T1(1) = Θ(1); otherwise

T1(N) = 4T1(N/2) + O(1)       ----- (1)

Solving equation (1) by the Master theorem: \(N^{\log_b a} = N^{\log_2 4} = N^2\), here a = 4 and b = 2.

So, T1(N) = Θ(N²)

Span (critical-path length T∞): the algorithm is multithreaded and the four spawned calls run in parallel, so only one of them lies on the critical path:

T∞(N) = T∞(N/2) + O(1), which gives T∞(N) = Θ(lg N)

Asymptotic parallelism of the algorithm:

T1/T∞ or θ(N²/lg N)
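For readers who want to see the recursion in real code, here is a hedged C sketch using OpenMP tasks in place of spawn (OpenMP, the row-major layout, and the leading-dimension parameter ld are assumptions for illustration; compile with -fopenmp). Note that the pseudocode as printed routes X12 into Y12; an actual transpose routes X12 into Y21 and X21 into Y12, which is what the sketch does, and the recurrence (hence the parallelism) is identical either way.

#include <stdio.h>
#include <omp.h>

/* Recursive parallel transpose: Y = X^T for an N x N matrix stored
   row-major with leading dimension ld (N assumed a power of 2).
   The four quadrant calls are independent, so they are spawned as
   OpenMP tasks. Work: Theta(N^2), span: Theta(lg N).               */
static void p_trans(const double *X, double *Y, int N, int ld) {
    if (N == 1) {
        Y[0] = X[0];
        return;
    }
    int h = N / 2;                                         /* quadrant size */
    #pragma omp task
    p_trans(X,              Y,              h, ld);        /* X11 -> Y11 */
    #pragma omp task
    p_trans(X + h,          Y + h * ld,     h, ld);        /* X12 -> Y21 */
    #pragma omp task
    p_trans(X + h * ld,     Y + h,          h, ld);        /* X21 -> Y12 */
    #pragma omp task
    p_trans(X + h * ld + h, Y + h * ld + h, h, ld);        /* X22 -> Y22 */
    #pragma omp taskwait
}

int main(void) {
    enum { N = 4 };
    double X[N * N], Y[N * N];
    for (int i = 0; i < N * N; i++) X[i] = i;
    #pragma omp parallel
    #pragma omp single
    p_trans(X, Y, N, N);
    printf("Y[1][2] = %g, X[2][1] = %g (should be equal)\n",
           Y[1 * N + 2], X[2 * N + 1]);
    return 0;
}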

The given diagram shows the flowchart for a recursive function A(n). Assume that all statements, except for the recursive calls, have O(1) time complexity. If the worst-case time complexity of this function is O(nα), then the least possible (accurate up to two decimal position) of α is _______.

[Flowchart for the recursive function A(n)]

Answer (Detailed Solution Below) 2.2 - 2.4

Asymptotic Worst Case Time and Time Complexity Question 14 Detailed Solution


To find the worst-case time complexity, follow the path through the flowchart that makes the most recursive calls.

Along that longest path there are 5 recursive calls to A(n/2).

So the recurrence relation becomes:

A(n) = 5A(n/2) + O(1)

where O(1) is the constant non-recursive work.

By using master’s theorem,

a = 5, b =2

\({\rm{O}}\left( {{{\rm{n}}^{{{\log }_2}5}}} \right) = {\rm{\;O}}({{\rm{n}}^{2.32}})\)

\({\rm{O}}({{\rm{n}}^{\rm{\alpha }}}) = {\rm{O}}({{\rm{n}}^{2.32}}){\rm{\;}}\)

α = 2.32
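How the numeric value is obtained (added arithmetic):

\(\alpha = \log_2 5 = \frac{\ln 5}{\ln 2} \approx \frac{1.6094}{0.6931} \approx 2.32\), which lies inside the accepted answer range 2.2 – 2.4.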

Identify the two types of efficiencies that are important for computer algorithms.

  1. Time efficiency and Higher power efficiency
  2. Higher power efficiency and Computational efficiency
  3. Computational efficiency and Space efficiency
  4. Space efficiency and Time efficiency

Answer (Detailed Solution Below)

Option 4 : Space efficiency and Time efficiency

Asymptotic Worst Case Time and Time Complexity Question 15 Detailed Solution


The correct answer is Space efficiency and Time efficiency.

Key Points

  • In computer science, algorithmic efficiency is a property of an algorithm that relates to the number of computational resources used by the algorithm.
  • An algorithm must be analyzed to determine its resource usage, and its efficiency can be measured in terms of the different resources it uses.
  • Two areas are important for performance:
    • Space efficiency - a measure of the amount of memory needed for an algorithm to execute.
    • Time efficiency - a measure of the amount of time for an algorithm to execute.

Additional Information

 In addition, every algorithm must satisfy the following criteria:

  • input: there are zero or more quantities that are externally supplied;
  • output: at least one quantity is produced;
  • definiteness: each instruction must be clear and unambiguous;
  • finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm will terminate after a finite number of steps;
  • effectiveness: every instruction must be sufficiently basic that it can in principle be carried out by a person using only pencil and paper. It is not enough that each operation is definite, but it must also be feasible. 