Chapter 4: Transactions
- READ COMMITTED: These locks prevent another transaction from modifying the data while the query is in progress, but do not block other readers. In READ COMMITTED mode, shared read locks are released as soon as a query completes, so data modification transactions can proceed at that point, even if the transaction to which the query belongs is still open. Therefore, non-repeatable reads are possible: if the same data is read twice in the same transaction, the answer may differ. If this is not acceptable, the isolation level can be made more restrictive. The REPEATABLE READ level ensures that rows which have been read cannot be modified or deleted until the transaction that reads them completes. However, even this level does not prevent new rows (called phantom rows) from being inserted that satisfy the query criteria, meaning that reads are not, in the true sense, repeatable. To prevent this, you could switch to SERIALIZABLE, the most restrictive isolation level of all, which essentially ensures that a transaction is completely isolated from the effects of any other transaction in the database … P117-118.
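A minimal sketch of raising the isolation level for a session (the table name is hypothetical, not from the book):

    -- REPEATABLE READ holds shared locks until the transaction ends;
    -- SERIALIZABLE additionally blocks phantom rows via key-range locks.
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;
        SELECT * FROM dbo.Orders WHERE CustomerID = 42; -- hypothetical table
        -- Re-reading here returns the same rows; phantoms are still possible.
    COMMIT TRANSACTION;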
- The sys.dm_tran_locks DMV provides information regarding both the resource on which the lock is being held (or has been requested), and the owner of the request. Descriptions of the useful columns in this DMV are given on P120-122.
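A hedged sketch of a basic query against this DMV (not one of the book's listings), showing the resource and request columns it exposes:

    -- Inspect current locks: what resource is locked, in what mode,
    -- by which session, and whether the lock is held or still waiting.
    SELECT resource_type,
           resource_database_id,
           resource_associated_entity_id,
           request_mode,      -- e.g. S, U, X, IS, IX
           request_status,    -- GRANT, WAIT, CONVERT
           request_session_id
    FROM sys.dm_tran_locks
    WHERE resource_database_id = DB_ID();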
- Lock types: SQL Server can lock a number of different types of resource, the most obvious being tables (OBJECT locks), pages (PAGE locks), rows (RID locks), and keys (KEY locks), listed in order of increasing granularity (other common types are on P124). The more granular the lock, the higher the degree of concurrent access that can be supported; however, with that comes a higher memory overhead, from having to manage a large number of individual locks. SQL Server automatically chooses locks of the highest granularity suitable for the given workload. However, if too many individual locks are being held, SQL Server may use lock escalation to reduce the total number of locks being held. While this results in lower overhead on SQL Server, the cost is lower concurrency. If processes running on your servers are causing lock escalation, it's worth investigating whether the escalation is justified, or whether SQL tuning can be performed to prevent it … P123.
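If investigation shows the escalation is not justified, one option (on SQL Server 2008 and later, and not specifically from the book) is to control escalation per table; a sketch with a hypothetical table name:

    -- TABLE is the default; AUTO escalates to the partition level on
    -- partitioned tables; DISABLE prevents escalation in most cases.
    ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);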
- SQL Server employs five different lock modes to control how concurrent transactions interact. These modes are exposed in the request_mode column of the sys.dm_tran_locks DMV, and are described on P125.
- Listing 4.5: which sessions are causing blocking and what statement they are running. Listing 4.6 is better and builds on Listing 4.5; it allows investigating locking and blocking based on waiting tasks [needs to be fixed: replace "2 AS [blocked_command]" with "2) AS [blocked_command]"]. You can add a column to the results to get the database name: db_name(DTL.[resource_database_id]) AS DatabaseName.
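The book's listings aren't reproduced here, but a minimal sketch of the waiting-tasks approach would look something like this:

    -- Who is blocked, who is blocking, and what statement is waiting.
    SELECT wt.session_id          AS blocked_session_id,
           wt.blocking_session_id,
           wt.wait_type,
           wt.wait_duration_ms,
           wt.resource_description,
           st.text                AS blocked_sql
    FROM sys.dm_os_waiting_tasks AS wt
    JOIN sys.dm_exec_requests AS r
        ON wt.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
    WHERE wt.blocking_session_id IS NOT NULL;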
- Listing 4.14: identifies all CURRENT ACTIVE transactions and their physical effect on the databases' transaction logs. This is especially useful when seeking out transactions that may be causing explosive transaction log growth.
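Not the book's Listing 4.14, but a sketch of the same idea using the transaction DMVs:

    -- Active transactions and how much log space each has generated.
    SELECT st.session_id,
           DB_NAME(dt.database_id)                  AS database_name,
           dt.database_transaction_begin_time,
           dt.database_transaction_log_record_count,
           dt.database_transaction_log_bytes_used
    FROM sys.dm_tran_database_transactions AS dt
    JOIN sys.dm_tran_session_transactions AS st
        ON dt.transaction_id = st.transaction_id
    ORDER BY dt.database_transaction_log_bytes_used DESC;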
- Snapshot Isolation and the tempdb Version Store: Each time a row is modified in a database running under snapshot isolation, a version of the row as it existed prior to the modification is stored in a version store in tempdb. Read transactions targeting the affected rows use the row version(s) from tempdb, while the writing transaction modifies the actual table data. Update locks are issued for writes, and when a read transaction encounters such a lock, it is diverted to the version store. This ensures that writers do not block readers, and readers do not block writers … P148
- Snapshot isolation introduces two new modes of operation:
- SNAPSHOT mode isolates read transactions from modifications that committed after the transaction began
- READ_COMMITTED_SNAPSHOT mode isolates read transactions from modifications which committed after the current statement began … P149-151 show how to enable and initiate the two modes; a sketch follows.
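A sketch of enabling and initiating the two modes (the database name is hypothetical; the book's exact steps are on P149-151):

    -- SNAPSHOT: enable at database level, then request it per transaction.
    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
        -- Reads here see the database as of the transaction start.
    COMMIT TRANSACTION;

    -- READ_COMMITTED_SNAPSHOT: enabled once, changes the behavior of the
    -- default READ COMMITTED level for all sessions (needs the database
    -- free of other active connections to take effect).
    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;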
- On P151 the author shows how to investigate snapshot isolation. He shows how to use sys.dm_tran_active_snapshot_database_transactions, sys.dm_tran_current_snapshot, and sys.dm_tran_transactions_snapshot to get the "current snapshot activity", then how to use sys.dm_tran_version_store and sys.dm_tran_top_version_generators to get "version store usage".
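A sketch of querying these DMVs directly (column selection abbreviated; the book's versions are on P151 onwards):

    -- Current snapshot activity.
    SELECT * FROM sys.dm_tran_active_snapshot_database_transactions;

    -- Version store usage: total rows, and which objects generate the most.
    SELECT COUNT(*) AS version_store_rows FROM sys.dm_tran_version_store;
    SELECT * FROM sys.dm_tran_top_version_generators;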
Reference: http://www.amazon.com/Performance-Tuning-Server-Dynamic-Management/dp/1906434476
Chapter 8: Large or Full Transaction Log
- The fact that writes are always sequential also means that SQL Server will only ever write to one transaction log file at a time. There is therefore no advantage, in terms of log writing performance, to having multiple transaction log files. The only reason to have more than one log file is if space considerations dictate the need for multiple log files, on different disks, in order to achieve the necessary log size for a database … P276.
- The allocated space inside of a transaction log file is internally divided into smaller segments known as virtual log files (VLFs), and the process of log truncation is simply the act of marking a VLF as "inactive" and so making the space in that VLF available for reuse.
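To see the VLFs inside a log file (not a technique from this chapter), one option is the undocumented but long-standing DBCC LOGINFO; on SQL Server 2016 SP2 and later, sys.dm_db_log_info is the documented alternative:

    -- One row per VLF; Status 2 = active (not yet truncated), 0 = reusable.
    DBCC LOGINFO;

    -- Documented equivalent on newer versions:
    SELECT * FROM sys.dm_db_log_info(DB_ID());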
- In the SIMPLE recovery model, log truncation (the mechanism through which SQL Server marks space inside the transaction log as available for reuse by the database) can occur immediately upon CHECKPOINT. In the FULL (or BULK LOGGED) recovery model, once a full backup of the database has been taken, the inactive portion of the log is not marked as reusable on CHECKPOINT, because a complete LSN chain must be maintained to allow point-in-time recovery of the database. Truncation can only occur upon a BACKUP LOG operation … P277-278.
- It is very important to size the log appropriately, and to avoid:
- A very high number of small VLFs (known as log file fragmentation), which can have a considerable impact on performance.
- Conversely, only a few very large VLFs, which can lead to problems related to rapid log growth in cases where truncation is delayed for some reason.
- Frequent log growth events, since for log files each growth event is a relatively expensive operation … P279.
- Index rebuilds use a lot more space in the log, but they can be minimally logged by temporarily switching the database to the BULK LOGGED recovery model (in the BULK LOGGED model the LSN chain is maintained, unlike the SIMPLE model, where the chain is broken). Note that the temporary change to BULK LOGGED will still allow you to restore to a point in time, but NOT to a point within the minimally logged (BULK LOGGED) period … P281.
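A sketch of the minimally logged rebuild pattern (database, table, and backup paths are hypothetical); taking a log backup immediately before and after the switch keeps the chain restorable up to each boundary:

    BACKUP LOG MyDatabase TO DISK = 'X:\Backups\MyDatabase_log1.trn';
    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
    ALTER INDEX ALL ON dbo.Orders REBUILD;  -- minimally logged here
    ALTER DATABASE MyDatabase SET RECOVERY FULL;
    BACKUP LOG MyDatabase TO DISK = 'X:\Backups\MyDatabase_log2.trn';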
- In contrast to rebuilding an index, reorganizing (defragmenting) an index, using ALTER INDEX REORGANIZE is always a fully-logged operation, regardless of the recovery model. However, index reorganizations generally require less log space than an index rebuild … P283.
- Generally, for fragmentation levels greater than 5 percent but less than or equal to 30 percent, you should reorganize the index, and for fragmentation levels greater than 30 percent, you should rebuild it … P284.
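A sketch applying those thresholds with sys.dm_db_index_physical_stats (index and table names hypothetical):

    -- Check fragmentation, then reorganize (5-30%) or rebuild (>30%).
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON ips.object_id = i.object_id AND ips.index_id = i.index_id
    WHERE ips.avg_fragmentation_in_percent > 5;

    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE; -- 5-30%
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;    -- > 30%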
- If you suspect log growth is being caused by log space not being reused, query sys.databases, as shown in Listing 8.1, to find out what's preventing reuse. The value of the log_reuse_wait_desc column will show the current reason why log space cannot be reused … P285.
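Listing 8.1 is presumably along these lines (a sketch, not the book's exact query):

    SELECT name,
           recovery_model_desc,
           log_reuse_wait_desc
    FROM sys.databases;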
- If the value returned for log_reuse_wait_desc, from the Listing 8.1 query, is LOG_BACKUP, then you are suffering from one of the most common causes of a full or large transaction log: operating a database in the FULL recovery model without taking transaction log backups. Even if full database backups are being taken, that is not enough, because: ❶ a full backup only protects the contents of the data file, not the log file, which will be required for point-in-time restores; ❷ full database backups do not truncate the transaction log; only a log backup will do that … P286.
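The fix is routine log backups; a minimal sketch (database name and path hypothetical):

    -- In the FULL recovery model, only this truncates the inactive
    -- portion of the log.
    BACKUP LOG MyDatabase TO DISK = 'X:\Backups\MyDatabase_log.trn';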
- If the value returned for log_reuse_wait_desc is ACTIVE_TRANSACTION, then you are suffering from the second most common cause of a full or large transaction log in SQL Server: long-running or uncommitted transactions … P287-292.
- To identify whether transaction log growth is being caused by an orphaned (or just long-running) transaction, use DBCC OPENTRAN ('DatabaseName'). The result shows only the oldest active transaction. Look first at the Start Time value; if the transaction started a long time ago, that is generally a sign of an uncommitted transaction. Then use the SPID with the sys.dm_exec_sessions and sys.dm_exec_connections DMVs to determine whether the transaction is actually an orphaned transaction or just a long-running one (as in Listing 8.5). If the SPID is in a runnable, running, or suspended status, then it is likely that the source of the problem is a long-running, rather than orphaned, transaction. The sys.dm_tran_session_transactions and sys.dm_tran_database_transactions DMVs can be used as in Listing 8.6; if, in its result, the open transaction was created before the last request start time, it is likely to be an orphaned transaction. The only solution to an orphaned transaction is to KILL the session … P293-295.
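A sketch of the investigation (database name and SPID are hypothetical; the KILL is a last resort for a confirmed orphaned transaction):

    -- Oldest active transaction in the database: note Start Time and SPID.
    DBCC OPENTRAN ('MyDatabase');

    -- Cross-check the SPID's status and last request times.
    SELECT session_id, status, last_request_start_time, last_request_end_time
    FROM sys.dm_exec_sessions
    WHERE session_id = 57; -- SPID reported by DBCC OPENTRAN (hypothetical)

    KILL 57; -- only if confirmed orphaned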
- Other possible causes of log growth include slow or delayed log reader activity, which can leave log entries marked as "pending replication" (instead of "replicated") for long periods, so the parent VLF cannot be truncated … P296-297.
- When the log_reuse_wait_desc column shows ACTIVE_BACKUP_OR_RESTORE as the current wait description, a long-running full or differential backup of the database is the most likely cause of the log reuse problems. The solution is to optimize the backup process or to improve the performance of the underlying disk I/O system … P298.
- When the log_reuse_wait_desc column shows DATABASE_MIRRORING as the current wait description, synchronous database mirroring operations may be the cause of the log reuse issues … P298.
- Error 9002, the transaction log full error, leaves the database effectively read-only. Solutions, in sequence:
- Run the query in Listing 8.1; if the value of the log_reuse_wait_desc column is LOG_BACKUP, then a lack of log backups is the likely cause of the issue. Run the query in Listing 8.7 to confirm.
- If, for some reason, it is not possible to perform a log backup, temporarily switch the database to the SIMPLE recovery model.
- If the second option is not possible, add an additional log file on a different disk array.
- Use DBCC SHRINKFILE with a target_size to which to shrink the log file, or specify 0 (zero) as the target size to shrink the log to its smallest possible size, and then immediately resize it to a sensible size using ALTER DATABASE … P299-301.
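A sketch of the shrink-then-resize step (database, logical file name, and sizes hypothetical):

    USE MyDatabase;
    DBCC SHRINKFILE (MyDatabase_log, 0);  -- shrink to smallest possible size

    -- Immediately resize to something sensible.
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8GB);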
Mismanagement or What Not To Do
- Do not detach the database and then delete the log file to create a new, clean log file, as this can result in the database failing to start, leaving it in the RECOVERY_PENDING state … P302.
- Do not use BACKUP LOG WITH TRUNCATE_ONLY (in SQL 2000) or BACKUP LOG TO DISK='NUL' (in SQL 2008). The right way to "force" log truncation is to temporarily switch the database into the SIMPLE recovery model, as discussed earlier … P303.
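A sketch of that "right way" (names and path hypothetical); this breaks the log chain, so take a full or differential backup afterwards to restart it:

    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE; -- log truncates on CHECKPOINT
    CHECKPOINT;
    ALTER DATABASE MyDatabase SET RECOVERY FULL;
    BACKUP DATABASE MyDatabase TO DISK = 'X:\Backups\MyDatabase.bak'; -- restart the chain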
- The transaction log should never be shrunk using DBCC SHRINKFILE, or by a database maintenance plan step that shrinks the database, as part of normal, scheduled maintenance operations. The reason is that every time you shrink the log, it will need to grow again immediately to store log records for subsequent transactions. First, that log growth operation is expensive; second, it results in log file fragmentation, and both may impact performance … P304.
- As a rule of thumb, the initial size of the transaction log should be set to 1.5 times the size of the largest index or table in the database, to allow for the logging requirements of rebuilding that index under FULL recovery. It is also important to monitor and adjust the size of the transaction log periodically to fit the size of the database as it grows … P305.
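A sketch of applying the rule of thumb: find the largest table's reserved size, then size the log to roughly 1.5 times that (names and sizes hypothetical):

    -- Largest table by reserved space, in MB.
    SELECT TOP (1)
           OBJECT_NAME(object_id)              AS table_name,
           SUM(reserved_page_count) * 8 / 1024 AS reserved_mb
    FROM sys.dm_db_partition_stats
    GROUP BY object_id
    ORDER BY reserved_mb DESC;

    -- If the largest table is ~10 GB, size the log to ~15 GB.
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 15GB);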
- Auto-growth should be configured as a safety net only, allowing the system to grow the log file automatically when you are unable to respond manually. There are problems with the auto-growth settings that a database inherits from model: ❶ they cause fragmentation; ❷ growth of large log files can take time, and if the log can't be grown fast enough this can result in 9002 (transaction log full) errors, and even in the auto-growth timing out and being rolled back … P306.
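A sketch of replacing the inherited percentage growth with a fixed increment (file name and increment hypothetical):

    -- Fixed-size growth is more predictable than the default percentage.
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 512MB);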