Counting sort is often used as a subroutine in which other sorting algorithm?
Heap Sort
Quick Sort
Radix Sort
Merge Sort
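The intended answer is Radix Sort: LSD radix sort makes one pass per digit, and each pass needs a stable sort over a tiny key range, which is exactly what counting sort provides. A minimal Python sketch (function names such as counting_sort_by_digit are illustrative, not from the question):

```python
def counting_sort_by_digit(arr, exp):
    """Stable counting sort on the decimal digit selected by exp (1, 10, 100, ...)."""
    count = [0] * 10
    for x in arr:
        count[(x // exp) % 10] += 1
    for d in range(1, 10):                 # prefix sums -> final positions
        count[d] += count[d - 1]
    output = [0] * len(arr)
    for x in reversed(arr):                # backward pass keeps equal digits in order
        d = (x // exp) % 10
        count[d] -= 1
        output[count[d]] = x
    return output

def radix_sort(arr):
    """Sort non-negative integers one digit at a time, least significant first."""
    exp = 1
    while arr and max(arr) // exp > 0:
        arr = counting_sort_by_digit(arr, exp)
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]
```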
Why is Merge Sort often preferred over simpler algorithms like Bubble Sort for large datasets?
Merge Sort's time complexity scales better with increasing data size.
Merge Sort has a lower space complexity than Bubble Sort.
Merge Sort's recursive approach is easier to implement.
Merge Sort is an in-place sorting algorithm, while Bubble Sort is not.
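The first option is the intended one: Merge Sort is O(n log n) against Bubble Sort's O(n^2), and that gap dominates as n grows (note that Merge Sort actually uses more auxiliary space, not less, so the second option fails). A short recursive sketch, assuming Python and allocating new lists for clarity:

```python
def merge_sort(arr):
    """O(n log n): halve, sort each half recursively, merge in linear time."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:            # taking from the left on ties keeps it stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```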
How does the time complexity of Radix Sort compare to that of comparison-based sorting algorithms like Merge Sort and Quick Sort for integers with a wide range?
Radix Sort has the same time complexity
Radix Sort is consistently faster
Radix Sort can be faster under certain conditions
Radix Sort is always slower
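A rough worked example of why "under certain conditions" is the right hedge: LSD radix sort on n integers of d digits in base b costs about O(d(n + b)). Sorting n = 10^6 32-bit integers one byte at a time means d = 4 passes with b = 256, roughly 4 million operations, versus n log2 n, about 20 million comparisons, for Merge or Quick Sort. But if the key range forces many passes (large d), or n is small, the comparison sorts can come out ahead.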
In which scenario is Bucket Sort likely to perform poorly?
Data is uniformly distributed within a known range
Data is heavily skewed towards one end of the range
Data is already sorted in reverse order
Data consists of a small number of unique elements
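Bucket Sort's O(n) average case assumes values spread evenly across buckets; heavily skewed data piles into one bucket, so the cost collapses to whatever the per-bucket sort does on nearly all of the input. A sketch assuming floats in [0, 1) and Python's built-in sorted within each bucket:

```python
def bucket_sort(arr, num_buckets=10):
    """Average O(n) for uniform input in [0, 1); skewed input lands in
    a few buckets and degrades toward the per-bucket sort's complexity."""
    buckets = [[] for _ in range(num_buckets)]
    for x in arr:
        buckets[min(int(x * num_buckets), num_buckets - 1)].append(x)
    return [x for b in buckets for x in sorted(b)]

print(bucket_sort([0.12, 0.91, 0.05, 0.55, 0.51]))  # [0.05, 0.12, 0.51, 0.55, 0.91]
```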
How does Kruskal's algorithm utilize sorting to find the minimum spanning tree of a graph?
It sorts the nodes of the graph based on their distances from a randomly chosen starting node.
Sorting is not used in Kruskal's algorithm; as a greedy algorithm it makes locally optimal choices directly.
It sorts the nodes of the graph in ascending order of their degrees (number of connected edges).
It sorts the edges of the graph in increasing order of their weights and then iteratively adds edges to the growing minimum spanning tree while avoiding the formation of cycles.
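The last option describes the algorithm correctly: sort all edges by weight once up front, then scan them in order, keeping each edge whose endpoints are not yet connected. A compact sketch, assuming edges are (weight, u, v) tuples and using a basic union-find with path compression:

```python
def kruskal(num_nodes, edges):
    """edges: list of (weight, u, v). Returns the MST edges."""
    parent = list(range(num_nodes))

    def find(x):                            # set representative, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # the sorting step the question asks about
        ru, rv = find(u), find(v)
        if ru != rv:                        # same root would mean a cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3)]))
# [(1, 0, 1), (2, 2, 3), (3, 1, 2)]
```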
What is the primary advantage of using counting sort over comparison-based sorting algorithms like Merge Sort or Quick Sort?
Counting sort can achieve a time complexity better than O(n log n) in certain scenarios.
Counting sort is an in-place sorting algorithm.
Counting sort works efficiently even for large datasets with a wide range of values.
Counting sort is a stable sorting algorithm by default.
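The first option is the key one: counting sort runs in O(n + k) for n keys in a known range of size k, sidestepping the Ω(n log n) lower bound on comparison sorting. Roughly, n = 10^6 exam scores in 0..100 cost about a million operations instead of some 20 million comparisons. The third option is the catch, not an advantage: when k is huge (say, arbitrary 64-bit keys), the O(k) count array dominates and the benefit disappears.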
What is the worst-case time complexity of Quick Sort and when does it occur?
O(n^2), when the pivot is always the median element
O(n log n), when the input array is sorted or reverse sorted
O(n log n), when the pivot is always the median element
O(n^2), when the input array is already sorted or reverse sorted
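A sketch illustrating the last option, assuming the classic Lomuto partition with the final element as pivot; on already-sorted (or reverse-sorted) input every split is maximally unbalanced, sizes n-1 and 0, so the work sums to O(n^2):

```python
def quick_sort(arr, lo=0, hi=None):
    """In-place Quick Sort, last-element pivot (Lomuto partition)."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    pivot, i = arr[hi], lo
    for j in range(lo, hi):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    # On sorted input the pivot is the maximum every time, so one side is
    # always empty: n recursion levels of O(n) work each, hence O(n^2).
    quick_sort(arr, lo, i - 1)
    quick_sort(arr, i + 1, hi)
    return arr

print(quick_sort([3, 6, 2, 5, 1, 4]))  # [1, 2, 3, 4, 5, 6]
```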
Why is Quick Sort often preferred over Merge Sort in practice, despite having the same average-case time complexity?
Quick Sort has a lower constant factor in its time complexity, making it faster for smaller datasets
Quick Sort is easier to parallelize and implement on multi-core processors
Quick Sort is an in-place sorting algorithm, while Merge Sort requires additional space for merging
Quick Sort is more memory-efficient due to its recursive nature
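The in-place property (third option) is the standard answer: the partition step in the sketch above rearranges elements by swapping within the array, typically needing only O(log n) stack space, whereas Merge Sort's merge allocates an O(n) auxiliary buffer. The cache friendliness of working in place is the closely related practical reason Quick Sort tends to win benchmarks.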
Which of the following is a key advantage of Merge Sort?
Stable sorting
In-place sorting
Best-case time complexity of O(n)
Constant space complexity
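Stability, the first option, means equal keys keep their original relative order, which matters when records are sorted by one field and then another. With the "take from the left half on ties" rule in the merge sketch earlier, Merge Sort guarantees this. The other options fail: standard Merge Sort is not in-place, its best case is still O(n log n), and it needs O(n) auxiliary space.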
Is counting sort inherently stable?
No, counting sort is inherently unstable.
Yes, counting sort is always stable.
The stability of counting sort depends on the input data.
Counting sort can be made stable with modifications to the algorithm.
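As usually presented (prefix sums plus a right-to-left placement pass, as in CLRS), counting sort is stable, and Radix Sort depends on exactly that; simplified variants that just regenerate keys from the count array discard the original records, which is why "can be made stable with modifications" also circulates as an answer. A sketch of the stable form, assuming (key, payload) records with integer keys in 0..k so the tie order is visible:

```python
def counting_sort(pairs, k):
    """Stable sort of (key, payload) pairs with integer keys in 0..k."""
    count = [0] * (k + 1)
    for key, _ in pairs:
        count[key] += 1
    for i in range(1, k + 1):          # prefix sums: items with key <= i
        count[i] += count[i - 1]
    out = [None] * len(pairs)
    for item in reversed(pairs):       # backward pass preserves tie order
        count[item[0]] -= 1
        out[count[item[0]]] = item
    return out

print(counting_sort([(2, 'a'), (1, 'b'), (2, 'c')], k=2))
# [(1, 'b'), (2, 'a'), (2, 'c')]  -- 'a' still precedes 'c'
```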