L-1.6 - Time Complexities of All Searching and Sorting Algorithms in 10 Minutes | GATE & Other Exams

**Sorting Algorithms: Time and Space Complexity**

In computer science, time complexity refers to the amount of time an algorithm takes to complete as a function of the size of the input. Sorting algorithms are a crucial part of many applications, and understanding their time and space complexities is essential for efficient programming.

**Heap Sort**

Heap sort is a comparison-based sorting algorithm that uses a heap data structure. A heap is a specialized tree-based data structure that satisfies the heap property: every parent node is greater than or equal to (in a max heap) or less than or equal to (in a min heap) its child nodes. Heap sort has a time complexity of O(N log N), meaning the running time grows in proportion to N log N as the input size N grows.

In heap sort we use either a max heap or a min heap. A max heap keeps the maximum element at the root; a min heap keeps the minimum element at the root. In both cases the time complexity is O(N log N): building the heap by repeated insertion takes O(N log N), and repeatedly extracting the root and restoring the heap takes another O(N log N), since each of the N extractions costs O(log N).
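To make the two phases concrete, here is a minimal sketch of in-place heap sort with a max heap; the function names are illustrative, not from the lecture:

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at `start`.
    `end` is the last valid (inclusive) index of the heap region."""
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1                  # left child
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                        # right child is larger
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heap_sort(a):
    n = len(a)
    # Phase 1: build the max heap.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(a, start, n - 1)
    # Phase 2: swap the root (maximum) to the end and shrink the heap;
    # N - 1 sift-downs of O(log N) each give the O(N log N) bound.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)
    return a
```

Note that this sketch builds the heap bottom-up (which is actually O(N)); the overall O(N log N) bound is set by the extraction phase either way.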

**Selection Sort**

Selection sort is another popular sorting algorithm. It works by repeatedly finding the minimum element from the unsorted part of the array and swapping it with the first unsorted element. The process continues until the entire array is sorted.

In all three cases (best, average, and worst), selection sort has a time complexity of O(N^2), meaning the running time grows quadratically with the input size. This is because selection sort scans the entire remaining unsorted portion for every position, performing roughly N^2/2 comparisons regardless of the input order.
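A minimal sketch of selection sort; the nested loops make the unconditional N^2-order comparison count easy to see:

```python
def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        # Scan the unsorted suffix a[i:] for its minimum.
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # Swap the minimum into position i.
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```

The inner loop runs whether or not the input is already sorted, which is why the best case is no better than the worst.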

**Complete Binary Tree (CBT)**

A complete binary tree is a data structure in which every level, except possibly the last, is completely filled, and the nodes on the last level are as far left as possible. The height of a CBT with N nodes is floor(log N), where the logarithm is base 2.
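The height formula can be checked numerically; a minimal sketch, assuming height counts edges from the root to the deepest leaf:

```python
import math

def cbt_height(n):
    """Height (in edges) of a complete binary tree with n >= 1 nodes:
    floor(log2(n)), since level d holds up to 2**d nodes."""
    return math.floor(math.log2(n))
```

For example, a single node has height 0, a full three-level tree of 7 nodes has height 2, and adding an 8th node starts a new level of height 3.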

In the case of heap sort, we use a complete binary tree to store the heap. Constructing the heap by inserting the N elements one at a time takes O(N log N), since each insertion costs O(log N). (A bottom-up "heapify" can build the heap in O(N), but the extraction phase keeps heap sort at O(N log N) overall.)

**Insertion into Heap**

When inserting an element into a heap, the new element is first placed in the next free leaf position, so the tree stays complete. It is then "sifted up": as long as it violates the heap property with its parent (larger than the parent in a max heap, smaller in a min heap), it is swapped with the parent, moving one level up per swap until it reaches its correct position.

The time complexity of inserting one element into a heap is O(log N), where N is the number of elements in the heap. This is because the new element sifts up through at most log N levels of the complete binary tree.

However, if we are inserting N elements, the time complexity becomes O(N log N). This is because we need to repeat the insertion process for each element, resulting in a total of N times the logarithmic factor.
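The append-then-sift-up insertion can be sketched as follows, with the heap stored as a flat list (children of index i at 2i+1 and 2i+2); the function name is illustrative:

```python
def heap_insert(heap, value):
    """Insert `value` into a max heap stored as a list."""
    heap.append(value)                      # O(1): next free leaf position
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] < heap[i]:          # heap property violated: swap up
            heap[parent], heap[i] = heap[i], heap[parent]
            i = parent
        else:
            break                           # at most log N swaps, one per level
    return heap
```

Calling this N times on an initially empty list builds the heap in the O(N log N) time described above.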

**Heap**

Beyond sorting, heaps are used to implement priority queues, where elements are served in order of priority: the highest-priority element (in a max heap) or lowest-priority element (in a min heap) is always available at the root in O(1) time.


**Huffman Coding**

Huffman coding is a variable-length prefix code that assigns shorter codes to more frequently occurring symbols. Building the code with a min heap takes O(N log N) time for N symbols: there are N - 1 merge steps, and each performs O(log N) heap operations.

In exams, Huffman coding is rarely asked as a pure complexity question; it usually appears as a numerical problem, such as constructing the code for given symbol frequencies and computing the average code length.

**Prims and Kruskal Algorithm**

Prim's algorithm is a greedy algorithm that finds a minimum spanning tree of a connected weighted graph. With an adjacency matrix its time complexity is O(N^2), where N is the number of vertices; with adjacency lists and a binary heap it improves to O((V + E) log V), where V is the number of vertices and E the number of edges.

Kruskal's algorithm, on the other hand, is another popular algorithm for finding a minimum spanning tree of a connected weighted graph. It sorts the edges by weight first, so its time complexity is O(E log E), where E is the number of edges in the graph.
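A minimal sketch of Kruskal's algorithm with a union-find structure; the sort dominates at O(E log E), and the helper names are illustrative:

```python
def kruskal(num_vertices, edges):
    """Total weight of a minimum spanning tree.
    `edges` is a list of (weight, u, v) tuples with 0-based vertices."""
    parent = list(range(num_vertices))

    def find(x):
        # Follow parent pointers to the component root, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):           # O(E log E) sort
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components: keep it
            parent[ru] = rv
            total += w
    return total
```

On a triangle with edge weights 1, 2, and 3, the algorithm keeps the two cheapest edges and rejects the third, giving an MST weight of 3.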

**Depth-First Search (DFS) and Breadth-First Search (BFS)**

DFS and BFS are two popular graph traversal algorithms that visit nodes in a graph or tree. DFS traverses the graph depth-first, starting from a given node and exploring as far as possible before backtracking. BFS traverses the graph breadth-first, visiting all nodes at a given depth level before moving on to the next level.

The time complexity of DFS is O(N + E), where N is the number of vertices in the graph and E is the number of edges.

The time complexity of BFS is also O(N + E).
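Both traversals can be sketched over an adjacency-list graph; each vertex is enqueued or visited once and each edge examined a constant number of times, which is where the O(N + E) bound comes from. The function names are illustrative:

```python
from collections import deque

def bfs_order(adj, start):
    """Breadth-first visit order from `start` over adjacency lists `adj`."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

def dfs_order(adj, start, seen=None):
    """Recursive depth-first visit order from `start`."""
    if seen is None:
        seen = set()
    seen.add(start)
    order = [start]
    for v in adj[start]:
        if v not in seen:
            order += dfs_order(adj, v, seen)
    return order
```

On the graph 0-1, 0-2, 1-3, BFS visits level by level (0, 1, 2, 3) while DFS dives down a branch first (0, 1, 3, 2).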


**Binary Search**

Binary search works only on sorted data. Its average- and worst-case time complexity is O(log N) with base 2, because each comparison halves the remaining search range. In the best case the target is the first element examined, which is O(1).

**Sequential (Linear) Search**

Sequential search is used when the data is not sorted: elements are examined one by one. The worst case is O(N), the best case is O(1) (the target is in the first position), and the average case is about N/2 comparisons, which is still O(N).

**Quick Sort**

Quick sort runs in O(N log N) in the best and average cases, but O(N^2) in the worst case. The worst case occurs when the data is already sorted (say 1, 2, 3, 4, 5) and the first element is chosen as the pivot: each partition places only the pivot, leaving N-1, then N-2, and so on elements still to sort. This worst-case behavior is a frequently asked exam point.

**Merge Sort**

Merge sort runs in O(N log N) in the best, average, and worst cases.

**Insertion Sort**

Insertion sort runs in O(N^2) in the average and worst cases, but O(N) in the best case. If the data is already sorted, each element needs only one comparison to confirm it is in place, much like slotting a 9 between an 8 and a 10 in an already arranged hand of cards.

**Bubble Sort**

Bubble sort runs in O(N^2) in all cases.

**All-Pairs Shortest Path (Floyd-Warshall)**

The Floyd-Warshall method finds the shortest path between every pair of vertices. Its time complexity is O(N^3), the largest of the complexities discussed here.

**Dijkstra's Algorithm**

Dijkstra's algorithm solves the single-source shortest path problem: it finds the minimum-cost path from one vertex to every other vertex. With a simple array-based implementation its time complexity is O(V^2), where V is the number of vertices.
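The O(log N) bound for binary search comes from halving the search range on every comparison; a minimal sketch over a sorted list:

```python
def binary_search(a, target):
    """Return the index of `target` in sorted list `a`, or -1 if absent.
    Each iteration halves [lo, hi], so at most about log2(N) iterations."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1                    # discard the left half
        else:
            hi = mid - 1                    # discard the right half
    return -1
```

Remember the precondition stressed above: the list must already be sorted, otherwise sequential search is the right tool.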