Time is Money: How Time Complexity Impacts the Performance of Your Code

Understanding time complexity in a simple way.


As a programmer, you're likely familiar with the concept of algorithms, which are sets of instructions for solving a particular problem. But have you ever stopped to consider how efficient your algorithms are? That's where time complexity comes in.

Time complexity generally refers to the behavior of an algorithm in the worst-case scenario, not its actual running time.

In general, time complexity is the amount of time an algorithm takes to run, expressed as a function of the size of its input.

Time complexity is usually measured in terms of Big O notation, which provides an upper bound on the growth rate of the algorithm's running time as the input size increases.

Time complexity can be analyzed in three scenarios:

1. Worst case: the case where the input causes the algorithm to take the longest time to run.

2. Average case: the case that takes into account the probability distribution of all possible inputs.

3. Best case: the case where the input causes the algorithm to take the shortest time to run.

Understanding time complexity with a simple example

Suppose you are writing an algorithm to check whether a target element is present in an array. Consider an array of size 100, and assume it takes 1 ms to compare one element with the target. The three cases discussed above for this algorithm will be:

  • Best case: The target element is at the first position, and the algorithm takes 1 ms to find it. The time complexity, in this case, is O(1), which means the algorithm takes constant time regardless of the input size.

  • Average case: The target element is equally likely to be at any position, so on average it takes about size/2 ms (50 ms here) to find it. The time complexity in this case is O(n/2), which is equivalent to O(n), where n is the size of the input array.

  • Worst case: The target element is at the last position, or it is not present in the array. In both cases, the algorithm needs to compare all elements in the array, which takes 100 ms for an array of size 100. The time complexity in this case is O(n), which means the running time grows linearly with the input size.

For an array of size 1000, the worst-case running time will be 1000 ms, and for an array of size 10000, it will be 10000 ms. The time required grows linearly with the size of the input array, hence the time complexity is O(n), where n is the size of the input array.
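As a concrete illustration, here is a minimal Python sketch of the linear search described above. The original post does not specify a language, so the function name and the sample values are illustrative:

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is not present."""
    for i, value in enumerate(arr):
        if value == target:  # one comparison per element
            return i         # best case: target at index 0 -> O(1)
    return -1                # worst case: target absent -> n comparisons -> O(n)


# Example usage on an array of size 100
numbers = list(range(100))
print(linear_search(numbers, 0))    # best case: found at the first position
print(linear_search(numbers, 99))   # worst case: found at the last position
print(linear_search(numbers, 500))  # worst case: not present, returns -1
```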

In general, we should always try to understand the worst-case time complexity, and to describe it we take the help of Big O notation.

Big O notation is a way to describe how the time or space needed for an algorithm to solve a problem changes as the size of the problem grows. It is used to compare the efficiency of different algorithms.

The notation uses a mathematical formula to describe how the time or space needed changes. For example, if an algorithm has a time complexity of O(n), it means that the time needed grows linearly with the size of the input. If it has a time complexity of O(n^2), the time needed grows quadratically with the size of the input.
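To make that difference concrete, here is a small Python sketch, assuming two simple illustrative functions that are not from the original post: a single loop whose work grows linearly with the input, and a nested loop whose work grows quadratically.

```python
def sum_elements(arr):
    """O(n): one pass over the input."""
    total = 0
    for x in arr:        # n iterations
        total += x
    return total


def count_ordered_pairs(arr):
    """O(n^2): a nested pass, every element compared with every other."""
    count = 0
    for x in arr:        # n iterations
        for y in arr:    # n iterations each -> n * n comparisons in total
            if x < y:
                count += 1
    return count
```

Doubling the input size roughly doubles the work done by sum_elements, but roughly quadruples the work done by count_ordered_pairs.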

We will look at some more interesting Big O notations in another blog. For the time being, share your thoughts in the comments and do follow.