Implementation of Locking in DBMS


Locking protocols are used in database management systems as a means of concurrency control. Multiple transactions may request a lock on the same data item simultaneously, so a mechanism is needed to manage these requests. This mechanism is called the lock manager. It relies on message passing: transactions and the lock manager exchange messages to handle the locking and unlocking of data items.

Data structure used in Lock Manager – The data structure used to implement locking is called the lock table.

  1. It is a hash table in which the names of data items are used as the hash index.
  2. Each locked data item has a linked list associated with it.
  3. Every node in the linked list represents one lock request: the transaction that requested the lock, the mode of lock requested (shared/exclusive), and the current status of the request (granted/waiting).
  4. Every new lock request for a data item is added at the end of the linked list as a new node.
  5. Collisions in the hash table are handled by separate chaining.
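As a rough sketch, the lock table described above might be modeled in Python as follows. The class and field names are illustrative, not taken from any specific DBMS; Python's built-in dict stands in for the hash table (it handles collisions internally, playing the role of separate chaining), and a deque stands in for each item's linked list:

```python
from collections import defaultdict, deque

class LockRequest:
    """One node in a data item's request list (names are illustrative)."""
    def __init__(self, txn_id, mode):
        self.txn_id = txn_id      # transaction that made the request
        self.mode = mode          # "shared" or "exclusive"
        self.status = "waiting"   # becomes "granted" once the lock is given

class LockTable:
    def __init__(self):
        # dict maps each data item name to its linked list of requests;
        # hashing and collision handling are delegated to dict itself.
        self.table = defaultdict(deque)

    def add_request(self, item, txn_id, mode):
        # New requests are always appended at the end of the item's list.
        req = LockRequest(txn_id, mode)
        self.table[item].append(req)
        return req
```

Deciding when a request's status flips from "waiting" to "granted" is the lock manager's job, described next.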

Consider the following example of a lock table:

Explanation: In the figure, the locked data items present in the lock table are 5, 47, 167 and 15. The transactions that have requested a lock are represented by a linked list shown below each item with a downward arrow. Each node in a linked list carries the name of the transaction that requested the data item, such as T33, T1 or T27. The colour of a node represents the status of the request, i.e. whether the lock has been granted or is waiting. Note that a collision has occurred for data items 5 and 47; it is resolved by separate chaining, where each data item acts as the header of the linked list containing its lock requests.

Working of Lock Manager –

  1. Initially the lock table is empty, as no data item is locked.
  2. Whenever the lock manager receives a lock request from a transaction Ti on a particular data item Qi, one of two cases arises:
    • If Qi is not already locked, a linked list is created for it and the lock is granted to the requesting transaction Ti.
    • If Qi is already locked, a new node containing the information about Ti's request is added at the end of its linked list.
  3. If the lock mode requested by Ti is compatible with the lock mode of the transaction(s) currently holding the lock, Ti acquires the lock too and the status of its request is set to 'granted'. Otherwise, the status of Ti's request remains 'waiting'.
  4. When a transaction Ti wants to unlock a data item it currently holds, it sends an unlock request to the lock manager. The lock manager deletes Ti's node from the item's linked list and grants the lock to the next compatible request in the list.
  5. Sometimes a transaction Ti may have to be aborted. In that case, all waiting requests made by Ti are first deleted from the linked lists in the lock table; once the abort is complete, the locks held by Ti are also released.
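The steps above can be sketched as a minimal lock manager in Python. This is an illustrative sketch, not a production implementation: it assumes the usual compatibility rule (a shared lock is compatible only with other shared locks, an exclusive lock with nothing) and grants waiting requests in FIFO order:

```python
from collections import defaultdict, deque

class LockManager:
    def __init__(self):
        # item -> queue of [txn, mode, status] nodes (the "linked list")
        self.table = defaultdict(deque)

    def _compatible(self, item, mode):
        # A request is compatible if no lock is held, or if both the
        # granted locks and the request are shared.
        granted = [r for r in self.table[item] if r[2] == "granted"]
        if not granted:
            return True
        return mode == "shared" and all(r[1] == "shared" for r in granted)

    def lock(self, item, txn, mode):
        # Steps 2-3: append the request; grant it only if compatible.
        status = "granted" if self._compatible(item, mode) else "waiting"
        self.table[item].append([txn, mode, status])
        return status

    def unlock(self, item, txn):
        # Step 4: remove txn's node, then grant the next compatible
        # waiting request(s), stopping at the first incompatible one
        # to preserve FIFO order.
        self.table[item] = deque(r for r in self.table[item] if r[0] != txn)
        for r in self.table[item]:
            if r[2] == "waiting":
                if self._compatible(item, r[1]):
                    r[2] = "granted"
                else:
                    break

    def abort(self, txn):
        # Step 5: drop txn's requests (waiting or granted) everywhere.
        for item in list(self.table):
            self.unlock(item, txn)
```

For example, two shared requests on the same item are both granted immediately, while a later exclusive request waits until both are released.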


Advantages of Locking –

Data Consistency: Locking helps ensure data consistency by preventing multiple users from modifying the same data simultaneously. By controlling access to shared resources, locking prevents data conflicts and keeps the database in a consistent state.

Isolation: Locking can ensure that transactions are executed in isolation from other transactions, preventing interference between transactions and reducing the risk of data inconsistencies.

Granularity: Locking can be implemented at different levels of granularity, allowing for more precise control over shared resources. For example, row-level locking can be used to lock individual rows in a table, while table-level locking can be used to lock entire tables.

Availability: Locking can help ensure the availability of shared resources by preventing users from monopolizing resources or causing resource starvation.


Disadvantages of Locking –

Overhead: Locking requires additional overhead, such as acquiring and releasing locks on shared resources. This overhead can lead to slower performance and increased resource consumption, particularly in systems with high levels of concurrency.

Deadlocks: Deadlocks can occur when two or more transactions are waiting for each other to release resources, causing a circular dependency that can prevent any of the transactions from completing. Deadlocks can be difficult to detect and resolve, and can result in reduced throughput and increased latency.
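One common way to detect such a circular dependency is to look for a cycle in the "waits-for" graph, where an edge Ti → Tj means transaction Ti is waiting for a lock held by Tj. A minimal sketch with illustrative names, using depth-first search:

```python
def has_deadlock(waits_for):
    """Return True if the waits-for graph (dict: txn -> list of txns it
    waits on) contains a cycle, i.e. a deadlock."""
    visited, on_stack = set(), set()

    def dfs(txn):
        visited.add(txn)
        on_stack.add(txn)
        for nxt in waits_for.get(txn, ()):
            if nxt in on_stack:
                return True          # back edge: cycle found
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(txn)
        return False

    return any(dfs(t) for t in list(waits_for) if t not in visited)
```

For instance, if T1 waits for T2 and T2 waits for T1, the graph contains a cycle and one of the two transactions must be aborted to break the deadlock.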

Reduced Concurrency: Locking can limit the number of users or applications that can access the database simultaneously, leading to reduced throughput and slower performance under heavy load.

Complexity: Implementing locking can be complex, particularly in distributed systems or in systems with complex transactional logic. This complexity can lead to increased development and maintenance costs.

Reference – Silberschatz, Korth, Sudarshan, Database System Concepts, 6th edition

Last Updated : 08 May, 2023