
Time Complexities and Fast IO


Introduction to Time Complexities

Designing an efficient algorithm to solve a problem is one of the hardest tasks in competitive programming, and assessing your algorithm's efficiency is of utmost importance. This is where time complexities are useful: they help us decide whether an algorithm can solve the problem within the required time and space constraints. Time complexity is one of the most important metrics for comparing algorithms.

For example, consider two different sorting algorithms, Bubble Sort and Quick Sort. Bubble Sort has a time complexity of O(N²), whereas Quick Sort has an average complexity of O(N log N). Thus, we conclude that Quick Sort is more efficient than Bubble Sort. We will get into Big O notation later in this tutorial.

A lot of you must have faced an issue by now where your code passes some of the test cases, while for others you get an error called TLE (Time Limit Exceeded). This is caused by inefficient code. Every test case comes with a time limit, and your code needs to produce an output for the given test case within that limit for it to pass. If you encounter this error, you need to make your code more efficient. Let us look at a sample code to understand complexities better.

Example

int func(int n)
{
    int sum=0;              //line 1
    for(int i=0;i<n;i++)    //line 2
        sum += i;           //line 3
    return sum;             //line 4
}

Let us now compute the number of times a statement is executed for each call of the function.

Line 1 and Line 4 are single statements and hence execute in one unit of time each.

Line 2 has a for loop which executes two statements on every pass: a condition check and an increment. The check runs exactly n+1 times (it stops once the condition is false), and the increment runs exactly n times. This line therefore contributes 2n + 1 to the total runtime.

Line 3 has two operations, addition and assignment. Each of these operations takes 1 unit of time. Since the loop runs n times, this line contributes 2n to the total runtime.

So, the total runtime can be written as a function of the input size n. The function would be

f(n) = 4n + 3

We can see that the function is linear with respect to the input size n.

Similarly now lets look at Bubble Sort:

void bubbleSort(int arr[], int n)
{
    int i, j;                               //line1
    for (i = 0; i < n-1; i++)               //line2
    {                                       //line3
        for (j = 0; j < n-i-1; j++)         //line4
        {                                   //line5
            if (arr[j] > arr[j+1])          //line6
                swap(arr[j], arr[j+1]);     //line7
        }                                   //line8
    }
}

Here n is the size of the array.

Let us now compute the number of times a statement is executed for each call of the function.

Line 1 is a single statement and hence executes in one unit of time.

Line 2 has a for loop whose body is another for loop, so for every iteration of the outer loop the inner loop runs again from the beginning.

The outer loop is executed n-1 times. On its first iteration the inner loop runs n-1 times, on the next n-2 times, and so on, down to 1. The total number of comparisons is therefore (n-1) + (n-2) + ... + 1 = n(n-1)/2.

Thus the total runtime grows with the square of the array size, i.e. n².

Big O notation

Big O notation is used in computer science to describe the performance or complexity of an algorithm. It specifically captures the worst-case scenario and can be used to express the execution time required by an algorithm.

We use the Big O notation for analyzing algorithms because it gives us an upper bound, and we are mostly concerned with the worst case scenarios.

Let us look at the most common complexities to understand Big O better.

O(1) or Constant Time

This is when the algorithm runs in constant time, i.e. it doesn't depend on the size of the input.

int start = 10;
int end = 20;
int mid;
mid = (start + end)/2;

O(n) or Linear Time

This is when the runtime of the algorithm grows linearly with the size of the input. For example,

for(int i=0;i<n;i++)
    cout << i << endl;

O(N²) or Quadratic Time

This is when the runtime of the algorithm grows quadratically with respect to the size of the input. For example,

for(int i=0;i<n;i++)
{
    for(int j=0;j<n;j++)
        cout << i+j << endl;
}

O(log N) or Logarithmic Time

This is when the algorithm halves the size of the problem in each step, so only a logarithmic number of steps is needed. For example, the binary search algorithm has O(log N) complexity.

int binsearch(int arr[], int low, int high, int item)
{
    if(low > high)                      // item is not present in the array
        return -1;
    int mid = (low + high)/2;
    if(item == arr[mid])
        return mid;
    if(item > arr[mid])
        return binsearch(arr, mid+1, high, item);
    return binsearch(arr, low, mid-1, item);
}

The logarithm is base 2, i.e., log₂N. Each step halves the search range, so an array of about 1,000,000 elements needs only around 20 comparisons, since 2^20 ≈ 10^6.
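
For illustration, here is a minimal sketch of how the function above could be called; the array contents and the value searched for (23) are just example data, and the array is assumed to be sorted in increasing order.

#include <iostream>
using namespace std;

int binsearch(int arr[], int low, int high, int item);   // the function defined above

int main()
{
    int arr[] = {2, 5, 8, 12, 23, 38, 56, 72};            // already sorted
    int n = sizeof(arr)/sizeof(arr[0]);
    int pos = binsearch(arr, 0, n-1, 23);
    if (pos != -1)
        cout << "Found at index " << pos << endl;         // prints: Found at index 4
    else
        cout << "Not found" << endl;
    return 0;
}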

Fast IO for Competitive Programming

In competitive programming, it is important to read input as fast as possible so we save valuable time.

You must have seen various problem statements saying: “Warning: Large I/O data, be careful with certain languages (though most should be OK if the algorithm is well designed)”. Key for such problems is to use Faster I/O techniques.

For such problems, the I/O data is extremely large and what we require is fast IO.

C++

Most of you who use C++ would be using cin and cout for I/O. However, you might find it surprising to know that scanf and printf are actually faster than cin and cout. In fact, scanf is almost 5 times faster. You can check out some comparisons here.

This is because the C++ library includes the C library, so it needs to support C and C++ I/O at the same time. To keep the buffers synchronized, cin synchronizes itself with the underlying C library's stdio buffer, and as a result it runs slightly slower. Although it is slower, this lets you freely mix C++ and C I/O. However, there is a workaround for this.

ios_base::sync_with_stdio(false); //line1
cin.tie(NULL);                    //line2

Line 1 toggles the synchronization of all the C++ standard streams with their corresponding standard C streams, provided it is called before the program performs its first input or output operation. Adding ios_base::sync_with_stdio(false); (synchronization is on by default) before any I/O operation disables it. sync_with_stdio is a static member function of std::ios_base.

tie() is a method which guarantees the flushing of std::cout before std::cin accepts an input. This is useful for interactive console programs which require the console to be updated constantly, but it also slows down the program for large I/O. Passing NULL unties cin from cout, so cout is no longer flushed automatically before every input operation.

Just add these two lines at the beginning of main() and you are good to go!
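
For concreteness, here is a minimal sketch of how these lines fit into a complete program; the task itself (reading n integers and printing their sum) is just an assumed example.

#include <iostream>
using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);   // stop syncing with the C stdio buffers
    cin.tie(NULL);                      // don't flush cout before every cin read

    int n;
    cin >> n;                           // number of values that follow
    long long sum = 0;
    for (int i = 0; i < n; i++)
    {
        int x;
        cin >> x;
        sum += x;
    }
    cout << sum << "\n";                // "\n" instead of endl avoids an extra flush
    return 0;
}

Printing "\n" instead of endl also helps with large output, since endl forces a flush on every line.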

Java

Most of you who use Java would be using the Scanner class for I/O. It is easy to use and requires a lot less typing; however, it is also very slow. This is because Scanner parses the input into tokens, splitting on whitespace, tabs, new lines, etc.

There are a few workarounds, such as wrapping System.in in a BufferedReader and splitting each line with a StringTokenizer, or using StreamTokenizer directly.

We would recommend you make a habit of using fast I/O methods. Sometimes no warning is provided in the problem statement, and you would be left wondering why an otherwise correct solution is exceeding the time limit.

Further Reading

  1. Chapters 1 and 3 of Introduction to Algorithms, Second Edition, by Thomas Cormen, Charles Leiserson, Ronald Rivest, and Clifford Stein.
  2. https://www.topcoder.com/community/data-science/data-science-tutorials/computational-complexity-section-1/
  3. https://www.hackerearth.com/practice/basic-programming/complexity-analysis/time-and-space-complexity/tutorial/
  4. http://bigocheatsheet.com/ - contains complexities of standard sorting algorithms and standard data structures
  5. Part 1, Part 2, Part 3 - video examples to understand the calculations of time complexity better.