In the context of universal hashing, what makes a family of hash functions 'universal'?
The property that, for any two distinct keys, the collision probability over the random choice of hash function is bounded (at most 1/m for a table of size m)
Its ability to adapt to any data distribution
Its use of a single, universally applicable hash function
The guarantee of zero collisions for any input set
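
For context on the bounded-collision option: a minimal sketch of the classic Carter-Wegman universal family h(x) = ((a*x + b) mod p) mod m, where p is a prime larger than any key and (a, b) are drawn at random. For any two distinct keys, the collision probability over that random draw is at most 1/m. The constants below are illustrative assumptions.

```python
import random

P = 2_147_483_647  # Mersenne prime, assumed larger than any key
M = 64             # number of buckets

def random_hash(p=P, m=M):
    """Draw one member of the universal family at random."""
    a = random.randrange(1, p)  # a in [1, p-1]
    b = random.randrange(0, p)  # b in [0, p-1]
    return lambda x: ((a * x + b) % p) % m

h = random_hash()
print(h(42), h(1337))  # bucket indices in [0, M)
```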
In a web server, which scenario is best suited for using a hashmap to optimize performance?
Storing and retrieving static website content like images and CSS files
Storing and retrieving user session data
Managing the order of user connections to ensure fairness
Maintaining a log of all incoming requests in chronological order
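
To make the session-data option concrete, here is a hypothetical session store: session IDs are unique keys looked up on every request, which is exactly the average O(1) access pattern a hashmap (a Python dict here) serves well. The names are illustrative, not a real framework API.

```python
sessions = {}  # session_id -> session data

def save_session(session_id, data):
    sessions[session_id] = data      # average O(1) insert

def load_session(session_id):
    return sessions.get(session_id)  # average O(1) lookup, None if absent

save_session("abc123", {"user": "alice", "cart": []})
print(load_session("abc123"))
```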
What is the primary advantage of using a hashmap over a simple array for storing and retrieving data?
Hashmaps use less memory than arrays.
Hashmaps provide average O(1) access to data by key, while arrays require a linear search when looking up by value rather than by index.
Hashmaps maintain data in sorted order, unlike arrays.
Hashmaps can store duplicate keys, while arrays cannot.
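
A small sketch of the contrast behind this question, assuming the lookup is by a key rather than by position: a list must be scanned, while a dict keyed on the same field answers directly.

```python
records = [("alice", 30), ("bob", 25), ("carol", 41)]

# List: scan every element until the name matches -- O(n).
age = next(age for name, age in records if name == "carol")

# Hashmap: index by key directly -- average O(1).
by_name = dict(records)
age = by_name["carol"]
print(age)
```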
What is the purpose of dynamic resizing (rehashing) in a hashmap?
To improve the efficiency of key deletion operations.
To reduce the number of keys stored in the hashmap.
To increase the size of the hash function's output range.
To maintain a low load factor and prevent performance degradation.
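
A minimal sketch of load-factor-driven rehashing for a chained hash table; the capacity, growth factor, and threshold are illustrative assumptions, not any particular library's defaults.

```python
class ChainedTable:
    def __init__(self, capacity=8, max_load=0.75):
        self.buckets = [[] for _ in range(capacity)]
        self.size = 0
        self.max_load = max_load

    def put(self, key, value):
        if (self.size + 1) / len(self.buckets) > self.max_load:
            self._rehash()
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))
        self.size += 1

    def _rehash(self):
        # Double the bucket count and re-insert every entry, so chains
        # stay short and the load factor drops back below the threshold.
        old = self.buckets
        self.buckets = [[] for _ in range(2 * len(old))]
        for bucket in old:
            for key, value in bucket:
                self.buckets[hash(key) % len(self.buckets)].append((key, value))

t = ChainedTable()
for i in range(20):
    t.put(i, i * i)    # triggers rehashes along the way
print(len(t.buckets))  # capacity grew from 8 to 32
```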
When choosing a collision resolution strategy for a hash table, which factors are essential to consider?
Size of the keys and values being stored
Expected data distribution and load factor
Programming language and hardware architecture
All of the above
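
To see why data distribution and load factor matter, here is a sketch of open addressing with linear probing, the main alternative to the chaining shown earlier: collisions walk to the next free slot, which stays fast at low load factors but clusters badly as the table fills. Sizes are illustrative.

```python
M = 8
slots = [None] * M  # each slot holds (key, value) or None

def probe_put(key, value):
    i = hash(key) % M
    for _ in range(M):
        if slots[i] is None or slots[i][0] == key:
            slots[i] = (key, value)
            return
        i = (i + 1) % M  # linear probe to the next slot
    raise RuntimeError("table full; a real table would resize instead")

probe_put("a", 1)
probe_put("b", 2)
print(slots)
```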
You are implementing an LRU (Least Recently Used) cache. Which data structure, in conjunction with a hashmap, is most suitable for tracking the usage order of cached items?
Doubly Linked List
Queue
Binary Tree
Stack
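
A minimal LRU sketch. In CPython, collections.OrderedDict is itself a hashmap combined with a doubly linked list, which is exactly the pairing the question points at: O(1) key lookup plus O(1) reordering and eviction at the ends.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now most recent
cache.put("c", 3)      # evicts "b"
print(cache.get("b"))  # None
```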
How does universal hashing enhance the robustness of hash tables?
By dynamically adjusting the hash function to the input data
By eliminating the possibility of hash collisions entirely
By ensuring a uniform distribution of keys across the hash table
By minimizing the impact of hash collisions on retrieval time
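
One way to see the robustness angle: keys crafted to pile into a single bucket under a fixed function are scattered again when the function is redrawn from the universal family. The constants mirror the earlier sketch and are illustrative.

```python
import random

p, m = 2_147_483_647, 64
a, b = random.randrange(1, p), random.randrange(0, p)
h = lambda x: ((a * x + b) % p) % m  # one random draw from the family

keys = [7 + 64 * i for i in range(8)]  # all equal mod 64
print("fixed h(x)=x%64 buckets:", {k % 64 for k in keys})  # {7}: adversarial pile-up
print("random universal buckets:", {h(k) for k in keys})   # usually spread out
```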
In the worst-case scenario, what is the time complexity of searching for a key in a hashmap?
O(log n)
O(n)
O(1)
O(n log n)
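
The worst case arises when every key collides. The deliberately broken hash below forces all keys into one probe chain, so each lookup degenerates into a linear scan; it is purely illustrative.

```python
class BadKey:
    def __init__(self, name):
        self.name = name
    def __hash__(self):
        return 0  # every key collides
    def __eq__(self, other):
        return isinstance(other, BadKey) and self.name == other.name

d = {BadKey(f"k{i}"): i for i in range(1000)}
print(d[BadKey("k999")])  # correct answer, but found by an O(n) scan
```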
In a system where memory usage is a major concern, what trade-off should be considered when using a hashmap?
Using a complex hash function always reduces collisions and memory usage.
Collision resolution strategies have no impact on memory consumption.
A larger hash table size generally results in faster lookups but consumes more memory.
Hashmaps always use less memory than arrays for storing the same data.
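
A rough illustration of the memory side of that trade-off: a dict keeps spare empty slots to hold its load factor down (fast lookups), so its container costs more bytes than a plain list over the same entries. Exact numbers vary by Python version and platform, and getsizeof counts only the container itself.

```python
import sys

n = 1000
as_list = [(i, i) for i in range(n)]
as_dict = {i: i for i in range(n)}

print("list container bytes:", sys.getsizeof(as_list))
print("dict container bytes:", sys.getsizeof(as_dict))  # noticeably larger
```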
You need to count the frequency of each word in a large text document. Which data structure (or combination of structures) would be most efficient for this task?
A hashmap where words are keys and their frequencies are values
A sorted linked list where each node contains a word and its frequency
A binary tree where words are stored in the nodes and their frequencies are stored in the leaves
Two arrays: one for storing words and one for storing their frequencies
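
The hashmap approach in practice: one pass over the words with an average O(1) update each. Counter is a dict subclass; a plain dict updated with freq[word] = freq.get(word, 0) + 1 behaves the same.

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"
freq = Counter(text.split())
print(freq.most_common(3))  # e.g. [('the', 3), ('fox', 2), ('quick', 1)]
```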