Latest Posts

Natively compiled user-defined functions

One new thing that SQL Server 2016 has added is the ability to natively compile user-defined functions. Previously, native compilation, part of the Hekaton (In-Memory OLTP) feature, was limited to stored procedures.

When I saw that, the first question that came to mind was whether natively compiling a scalar function reduces the overhead of calling that function within another query. I’m not talking about data-accessing scalar UDFs, since natively compiled functions can only access in-memory tables, but functions that do simple manipulation of the parameters passed in. String formatting, for example, or date manipulation.

While not as harmful as data-accessing scalar UDFs, there’s still overhead as these are not inline functions, they’re called for each row in the resultset (as a look at the Stored Procedure Completed XE event would show), and the call to the function takes time. Admittedly not a lot of time, but when it’s on each row of a large resultset the total can be noticeable.

I decided to do a simple test: a query against a table with ~600k rows, run once with a traditional scalar function, once with a natively compiled function, and once with the function's contents placed directly in the query as a column.

The two functions:

CREATE FUNCTION dbo.DateOnly (@Input DATETIME)
RETURNS DATETIME
AS
BEGIN
  RETURN DATEADD(dd, DATEDIFF (dd, 0, @Input), 0);
END
GO

and

CREATE FUNCTION dbo.DateOnlyHekaton (@Input DATETIME)
RETURNS DATETIME
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
  RETURN DATEADD(dd,DATEDIFF(dd,0,@Input),0);
END
GO

And the query without the function would be a simple

SELECT DATEADD(dd, DATEDIFF(dd, 0, SomeDate), 0)
FROM SomeTable;
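For completeness, the two function-based queries are just the same query with the expression swapped for the function call (table and column names as in the inline version above):

SELECT dbo.DateOnly(SomeDate)
FROM SomeTable;

SELECT dbo.DateOnlyHekaton(SomeDate)
FROM SomeTable;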

Durations and CPU usage were caught with Extended Events. I ran each query 25 times and aggregated the results.
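For anyone wanting to reproduce the test, here's a minimal sketch of the kind of Extended Events session that can capture those numbers (the session and file names are my choices, not necessarily what was used):

CREATE EVENT SESSION QueryTimings ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'QueryTimings.xel');
GO
ALTER EVENT SESSION QueryTimings ON SERVER STATE = START;
GO

The sql_statement_completed event reports both cpu_time and duration for each statement.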

                             Average CPU (ms)   Average Duration (ms)
In-line expression                        289                     294
Normal function                          3555                    3814
Natively compiled function               3318                    3352

Not quite what I hoped. While the overhead of the natively compiled function is lower, it's lower only by about 10%, which really is not worth it, not when we're talking about an order of magnitude difference from the query without the function call.

Looks like the guidance is still going to be to not use scalar UDFs within other queries.

Book Review 2015

It’s been a few years since I wrote a review of books I’ve read, so this isn’t going to list everything I’ve read since 2012, just a few of the best books or series. For the entire list of all books, see the library page.

So, 2015. I set a goal of 50 (new) books to read in the year, same goal I’d set the previous few years. I read 70 in total. This doesn’t include re-reads of old favourites, just new books. For 2016 I’m going to set a goal of 75, that’s an average of 6.25 books a month, or 1.44 books a week.

[Image: Books2015]

Some of the best books or series of books I've read in the last year are:

Hounded (Iron Druid Chronicles)

Author: Kevin Hearne

This is a brilliant series. Atticus O’Sullivan is a druid, born during the Roman occupation of Britain and currently living in Arizona, running a second hand book shop. He’s hiding from a few deities (and other supernatural creatures) who he’s annoyed over the centuries, but his quiet life is about to come to an end.

Good character, good plotting, great humour. The series is a fantastic read.

Amazon link: Hounded

Dauntless (The Lost Fleet)

Author: Jack Campbell

Amazon link: Dauntless

Rivers of London

Probationary Constable Peter Grant is just trying to do his job, and avoid a transfer to the Case Progression Unit, when he interviews a murder witness who died a couple hundred years ago. The witness, not the victim. Then his life gets really strange.

This series is so much fun. British humour, magic, the weirdness that is London (and surrounds) and fun characters.

This is the first of a series. The other books are Moon over Soho, Whispers Underground, Broken Homes and Foxglove Summer. Rivers of London was also released under the title Midnight Riot.

Amazon link: Rivers of London

In Her Name: Redemption

Humans have been fighting a century-long war with the Kreelan Empire, and they're losing. Reza Gard, orphaned at a young age due to the war, is kidnapped by the Kreelans and forced to live among his enemies, but he may hold the key to the fulfilment of a 10 000 year old Kreelan prophecy, and to the end of the war.

Excellent science fiction, well plotted, good characters.

Amazon link: Redemption. This is the complete text of the Redemption trilogy, Empire, Confederation and Final Battle. There’s a trilogy that discusses the start of the war, The Last War, comprised of First Contact, Legend of the Sword and Dead Soul, but I didn’t enjoy it quite as much. First Contact is excellent, the other two aren’t quite as good.

Ready Player One

A story of a search through an online virtual world for a massive prize, nothing less than the complete ownership of that virtual world and several billion dollars. Lots and lots of geek references, games, TV shows, movies, etc.

Amazon link: Ready Player One

The Laundry Files

What if magic was real? What if magic could be done by solving certain mathematical equations? What if that meant that computer programmers were actually magicians?

That's the premise of the Laundry Files. Magic is a branch of applied mathematics, and computers are really good at maths. Unfortunately, such magic tends to have unpleasant consequences, like interdimensional horrors eating your brain. Hence the existence of a government department trying to keep magic under control, and to put off the end of the world for a few more years.

This is another series set (mostly) in London with more typical British humour.

Amazon Link: The Atrocity Archives.

The Stormlight Archive

Brandon Sanderson did a wonderful job finishing off the Wheel of Time series; now he's starting his own epic fantasy series, and it's good.

The world of Roshar is scoured by hurricane-strength storms that cross it from east to west every few days. On this battered world, kings and armies fight for power, for wealth and for legendary weapons called Shardblades. But there are hints from various sources that something terrible is coming, something that will tear the world apart.

Amazon Link: The Way of Kings. Depending on the publisher, the books are either sold in 2 parts or as a single book. The two published titles of the Stormlight Archive are “The Way of Kings” and “Words of Radiance”.

When naming transactions causes an error

For the last part of the series on transactions, I’m going to look at a problem that I ran across entirely by chance while doing some performance tuning. A procedure had a transaction in it, and it was named. Something like this:

CREATE PROCEDURE RemoveUser (@UserID INT)
AS

BEGIN TRY
  BEGIN TRANSACTION Important
    DELETE FROM Users WHERE UserID = @UserID
  COMMIT TRANSACTION Important
END TRY
BEGIN CATCH
  ROLLBACK TRANSACTION Important
END CATCH
GO

The error’s squelched, not handled, but let’s ignore that, it’s not the point of this post. The example as written doesn’t need a transaction, but the real procedure was a tad more complex. The transaction is named, and the name is specified in the COMMIT and ROLLBACK as well.

Before I go into the problem with this code, I’m going to quote from three Books Online entries.

BEGIN TRANSACTION

transaction_name

Is the name assigned to the transaction. transaction_name must conform to the rules for identifiers, but identifiers longer than 32 characters are not allowed. Use transaction names only on the outermost pair of nested BEGIN…COMMIT or BEGIN…ROLLBACK statements. transaction_name is always case sensitive, even when the instance of SQL Server is not case sensitive.

COMMIT TRANSACTION

transaction_name

Is ignored by the SQL Server Database Engine. transaction_name specifies a transaction name assigned by a previous BEGIN TRANSACTION. transaction_name must conform to the rules for identifiers, but cannot exceed 32 characters. transaction_name can be used as a readability aid by indicating to programmers which nested BEGIN TRANSACTION the COMMIT TRANSACTION is associated with.

ROLLBACK TRANSACTION

transaction_name

Is the name assigned to the transaction on BEGIN TRANSACTION. transaction_name must conform to the rules for identifiers, but only the first 32 characters of the transaction name are used. When nesting transactions, transaction_name must be the name from the outermost BEGIN TRANSACTION statement. transaction_name is always case sensitive, even when the instance of SQL Server is not case sensitive.

I do love how Books Online refers to nested transactions as though they were real things…

Now that we've perused the documentation, let's go back to the code. As-is it runs fine; however, let's try calling that procedure from within another transaction.

BEGIN TRANSACTION
  EXEC dbo.RemovePosts @UserID = 152;
  EXEC dbo.RemoveThreads @UserID = 152;
  EXEC dbo.RemoveUser @UserID = 152;
COMMIT

Some sort of ‘remove all my details’ functionality, except that I’ve ‘forgotten’ to check one foreign key relationship and so the delete inside the inner transaction is going to throw an error. Execution is going to be transferred to the CATCH block and the ROLLBACK TRANSACTION is going to run.

Msg 6401, Level 16, State 1, Procedure RemoveUser, Line 10
Cannot roll back Important. No transaction or savepoint of that name was found.

Whoops.

The documentation for ROLLBACK TRANSACTION states ‘When nesting transactions, transaction_name must be the name from the outermost BEGIN TRANSACTION statement.’. In this case a second transaction has been wrapped around the first, the outer transaction had no name and hence an error occurred.

No big deal though, right? It’s just a different error being thrown, right? It’s not as if we were handling and logging the error properly in the first place.

Not quite.

[Image: OpenTransaction - the session still has an open transaction after the error]

The error was thrown by the ROLLBACK statement. As such, the transaction is still open, the locks are held and the transaction log space can't be reused. Unless the application that called this was checking for open transactions, that transaction could potentially be left open for quite some time, causing blocking and/or transaction log growth.
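Checking for that is simple enough; a quick sketch (from the affected session, or for the database as a whole):

SELECT @@TRANCOUNT AS OpenTranCount; -- in the session that ran the procedure

DBCC OPENTRAN; -- oldest active transaction in the current database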

It's not just that someone in the future might call the code from another stored proc within a transaction; the code might also be called from an application which started a transaction, or from SSIS, which started a transaction. It's very hard to ensure that code is never called from within an existing transaction.

In conclusion, if you want to put a name on a BEGIN TRANSACTION, for documentation purposes or to help with debugging, go ahead. If you want to put a name on a COMMIT, that’s fine too, although it has no effect at all on how the code runs. Don’t put names on the ROLLBACK ever, as it leaves the code with a potential hidden bug.
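As a sketch of what that guidance looks like applied to the original example: the names stay on BEGIN and COMMIT for documentation, the ROLLBACK is unnamed, and a @@TRANCOUNT check stops the ROLLBACK itself from throwing.

ALTER PROCEDURE RemoveUser (@UserID INT)
AS
BEGIN TRY
  BEGIN TRANSACTION Important
    DELETE FROM Users WHERE UserID = @UserID
  COMMIT TRANSACTION Important -- name is ignored by the engine, readability only
END TRY
BEGIN CATCH
  IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION; -- unnamed: rolls back to the outermost BEGIN TRANSACTION
  THROW; -- pass the error to the caller rather than squelching it
END CATCH
GO

Note that if the procedure was called from within an outer transaction, that ROLLBACK still rolls the whole nest back, as discussed in the first post of this series; at least it now does so without throwing error 6401.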

This concludes the short series on transactions. The full list is:

  1. A Mess of Nested Transactions
  2. Savepoints and conditional transactions
  3. Why would you want to name a transaction?
  4. When naming transactions causes an error (this post)

Why would you want to name a transaction?

Something that I keep seeing in documentation, in forum code and in real systems is transactions that have names:

BEGIN TRANSACTION VeryImportantProcess

<do stuff>

COMMIT VeryImportantProcess

Now the question came up on the forums a while back as to what the point of this is. Why would you name a transaction, what effect does it have on behaviour and how is it treated internally?

In the first post in this series I had a look at nested transactions, and in that post we saw that nested transactions don't have any effect on how the transactions are treated. The BEGIN TRANSACTION statement isn't even logged if there's already an open transaction in that session, and a ROLLBACK still rolls everything back (unless it's specifically rolling back to a savepoint).

The first question, then, regarding the naming of transactions is whether it has any effect at all on that behaviour.

CREATE TABLE TestTrans (
    ID INT,
    InsertDate DATETIME
);

BEGIN TRANSACTION OuterTran
  INSERT INTO TestTrans (ID, InsertDate) VALUES (1,GETDATE())
  BEGIN TRANSACTION InnerTran
    INSERT INTO TestTrans (ID, InsertDate) VALUES (2,GETDATE())
  COMMIT TRANSACTION InnerTran
ROLLBACK TRANSACTION OuterTran

SELECT * FROM TestTrans

[Image: NoRecords - the SELECT returns no rows]

Clearly none. Even with all the transactions named, a commit of a named transaction did not commit anything until the transaction nest level reached 0. A rollback still rolls back everything.

Ok, so maybe there’s some internal usage for a transaction’s name. Let’s have a look at the transaction log to see.

CHECKPOINT;

BEGIN TRANSACTION OuterTran
  INSERT INTO TestTrans (ID, InsertDate) VALUES (1,GETDATE())
  BEGIN TRANSACTION InnerTran
    INSERT INTO TestTrans (ID, InsertDate) VALUES (2,GETDATE())
  COMMIT TRANSACTION InnerTran
ROLLBACK TRANSACTION OuterTran

SELECT * FROM fn_DBLog(NULL, NULL)

[Image: TranLogContents - fn_dblog output for the named transactions]

The outer transaction's name appears in the log, replacing what was previously 'user_transaction', but other than that there are no differences. The inner transaction still doesn't appear anywhere in the log, neither does the inner commit, and the rollback doesn't reference the transaction name anywhere.

So what conclusion can we come to here? Pretty much that naming transactions has one real use: as a form of documentation. Instead of putting a comment above a BEGIN TRANSACTION, we can give the transaction a name that indicates what the transaction does. That's about the only real use.

With two exceptions.

There's a seldom-used option on BEGIN TRANSACTION, an optional WITH clause. Using that along with the transaction name can have some uses in exceptional circumstances.

BEGIN TRANSACTION VeryImportantProcess WITH MARK

<do stuff>

COMMIT TRANSACTION VeryImportantProcess

Now if we run an example of that and look into the log, there’s something different there. Interestingly, the difference isn’t on the BEGIN_XACT, it’s on the COMMIT_XACT log record.

[Image: transaction log output - the mark appears on the COMMIT_XACT log record]

The question still stands though: why do this? Why name a transaction and put a mark for it into the transaction log? What benefit does it give?

The mark can be used with a rather poorly known and seldom-used pair of options on the RESTORE command (when restoring transaction log backups):

  • WITH STOPAFTERMARK
  • WITH STOPBEFOREMARK

These two, similar to the better-known STOPAT, can be used to stop a transaction log restore part way through a log backup. Unlike STOPAT, they don't take a time, but rather a string: the name of a marked transaction.
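A sketch of what such a restore looks like (the backup file names here are hypothetical):

RESTORE DATABASE TransactionDemo
FROM DISK = N'D:\Backups\TransactionDemo_Full.bak'
WITH NORECOVERY;

RESTORE LOG TransactionDemo
FROM DISK = N'D:\Backups\TransactionDemo_Log.trn'
WITH STOPBEFOREMARK = N'VeryImportantProcess', RECOVERY;
-- STOPAFTERMARK would instead include the marked transaction in the restore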

Now to be honest there's not all that much use for this. If a user drops a table or otherwise messes data up, requiring a restore to the point before that happened, they're not going to be nice and run that accidental data mess-up in a named, marked transaction.

Where it can be useful is when the transaction crossed multiple databases and/or multiple servers. The log mark appears in all of them and can therefore be used to restore multiple databases to a consistent state where that consistent state is not a single point in time.

Still, it’s useful to have in the toolbox just in case some circumstance comes up where it’s needed.

Edit: It was pointed out, both on twitter and in the comments, that adding a name to a transaction reflects in the transaction-related DMVs and potentially makes debugging easier. Absolutely, it can do and it’s a valid reason to name transactions, I’m lumping that in with ‘documentation’ in the discussions here.

There's another case where naming transactions can affect behaviour, and in this case usually in a bad way. I'll look at it in more detail in part 4 of this series, but for now I'll just quote the relevant segment of Books Online:

transaction_name Is the name assigned to the transaction on BEGIN TRANSACTION. transaction_name must conform to the rules for identifiers, but only the first 32 characters of the transaction name are used. When nesting transactions, transaction_name must be the name from the outermost BEGIN TRANSACTION statement. transaction_name is always case sensitive, even when the instance of SQL Server is not case sensitive.

The full list is:

  1. A Mess of Nested Transactions
  2. Savepoints and conditional transactions
  3. Why would you want to name a transaction? (This post)
  4. When naming transactions causes an error

Savepoints and conditional transactions

This is the second in a short series on transactions. In the previous part I discussed nested transactions and showed that they actually don’t exist. SQL has syntax which may suggest that nested transactions exist, but it’s just a syntactical lie.

One thing I suggested in that post was that transactions should be started and committed (or rolled back) at one level only.

That’s a nice idea, but it often doesn’t work when dealing with existing systems which may have been developed with less attention to transactions than ideal. What happens when a stored procedure needs to run in a transaction and potentially roll back its changes, but can be called either within an existing transaction or independently?

The answer to that comes in two parts: first, the conditional starting of a transaction, and second, savepoints.

I should mention that this is fairly advanced transaction management; it's not something I see in production code very often. It should be considered carefully before being used, because if the people supporting the code don't understand it, there could be trouble.

Let’s start with some setup. Some random tables and a stored procedure which inserts into parent and child tables.

CREATE TABLE Parent (
    SomeID INT IDENTITY PRIMARY KEY,
    SomeRandomValue VARCHAR(50),
    NotificationRequired BIT DEFAULT 0
);

CREATE TABLE Child (
    SomeOtherID INT IDENTITY PRIMARY KEY,
    ParentID INT,
    State INT,
    SomeValue INT
);

CREATE TABLE Notifications (
    ParentID INT,
    NotificationText VARCHAR(1000),
    NotificationDate DATETIME DEFAULT GETDATE()
);

Let's say we have an outer procedure which may insert into the parent table, inserts into the child table and then calls another procedure. The other procedure inserts a notification into a table.

CREATE PROCEDURE OuterProc (@SomeValue VARCHAR(50), @SomeOtherValue INT)
AS

DECLARE @ParentID INT;

BEGIN TRY
    BEGIN TRANSACTION

    IF NOT EXISTS ( SELECT  1
                    FROM    dbo.Parent
                    WHERE   SomeRandomValue = @SomeValue )
        BEGIN
            INSERT  INTO dbo.Parent
                    (SomeRandomValue)
            VALUES  (@SomeValue);

            SELECT  @ParentID = @@IDENTITY;
        END
    ELSE
        SELECT  @ParentID = SomeID
        FROM    dbo.Parent
        WHERE   SomeRandomValue = @SomeValue;

    INSERT  INTO dbo.Child
            (ParentID, SomeValue)
    VALUES  (@ParentID, @SomeOtherValue);

    UPDATE  dbo.Parent
    SET     NotificationRequired = 1
    WHERE   SomeID = @ParentID;

    EXEC NotificationProc @ParentID;

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
-- other error handling here
END CATCH
GO

There are some potential race conditions in there, but let’s ignore those for now, it’s not the point of this post.

The requirement for the notification procedure is that if it throws an error, all of its changes must be undone; however, the changes in the outer procedure must still commit. The notification procedure can be called independently or from within OuterProc.

The starting code for the notification procedure is:

CREATE PROCEDURE NotificationProc (@AnID INT)
AS
    UPDATE  dbo.Parent
    SET     NotificationRequired = 0
    WHERE   SomeID = @AnID

    INSERT  INTO dbo.Notifications
            (ParentID,
             NotificationText,
             NotificationDate
            )
            SELECT  SomeID,
                    SomeRandomValue,
                    GETDATE()
            FROM    dbo.Parent
            WHERE   SomeID = @AnID;

GO

This currently has no transaction management at all. Let’s start by implementing the first requirement, if called from outside of any transaction, it must begin a transaction and either commit or rollback all changes.

To do this, we can check the value of @@Trancount to see whether a transaction should be started or not.

CREATE PROCEDURE NotificationProc (@AnID INT)
AS
    DECLARE @Independent BIT = 0;
    IF @@TRANCOUNT = 0  -- proc called from outside of any transaction
        SET @Independent = 1;

    BEGIN TRY
        IF @Independent = 1
            BEGIN TRANSACTION;

        UPDATE  dbo.Parent
        SET     NotificationRequired = 0
        WHERE   SomeID = @AnID;

        INSERT  INTO dbo.Notifications
                (ParentID,
                 NotificationText,
                 NotificationDate
                )
                SELECT  SomeID,
                        SomeRandomValue,
                        GETDATE()
                FROM    dbo.Parent
                WHERE   SomeID = @AnID;

        IF @Independent = 1
            COMMIT TRANSACTION;

    END TRY
    BEGIN CATCH

        IF @Independent = 1
            ROLLBACK TRANSACTION;

        THROW;

    END CATCH;

GO

The THROW is there so that any error can be passed up to the caller to be logged or otherwise handled. It also ensures that it fires any catch block in the calling code.

That’s the first half, the transaction handling if called independently, but it doesn’t help with the second requirement, that the changes made in the notification procedure roll back if there’s an error, but leave the changes made in the calling procedure unaffected and the transaction open. To do that, we need a savepoint.

From Books Online:

A user can set a savepoint, or marker, within a transaction. The savepoint defines a location to which a transaction can return if part of the transaction is conditionally canceled. If a transaction is rolled back to a savepoint, it must proceed to completion with more Transact-SQL statements if needed and a COMMIT TRANSACTION statement, or it must be canceled altogether by rolling the transaction back to its beginning.

It's not a nested transaction, and it's not like Oracle's autonomous transactions; it's just a point within the transaction that we can roll back to. It lets us undo the most recent part of a transaction.
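A minimal sketch of that behaviour, using a throwaway table (the names are mine, purely for illustration):

CREATE TABLE SavepointDemo (SomeNumber INT);

BEGIN TRANSACTION
    INSERT INTO SavepointDemo (SomeNumber) VALUES (1);

    SAVE TRANSACTION BeforeRisky; -- define the savepoint

    INSERT INTO SavepointDemo (SomeNumber) VALUES (2);

    ROLLBACK TRANSACTION BeforeRisky; -- undoes only the second insert
COMMIT TRANSACTION

SELECT SomeNumber FROM SavepointDemo; -- returns only the row with 1

Applying the same pattern to the notification procedure: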

ALTER PROCEDURE NotificationProc (@AnID INT)
AS
    DECLARE @Independent BIT = 0;
    IF @@TRANCOUNT = 0  -- proc is being called from outside of any transaction
        SET @Independent = 1;

    BEGIN TRY
        IF @Independent = 1
            BEGIN TRANSACTION;
        ELSE
            SAVE TRANSACTION Notifications; -- Define a savepoint which we can roll the transaction back to.

        UPDATE  dbo.Parent
        SET     NotificationRequired = 0
        WHERE   SomeID = @AnID;

        INSERT  INTO dbo.Notifications
                (ParentID,
                 NotificationText,
                 NotificationDate
                )
                SELECT  SomeID,
                        SomeRandomValue,
                        GETDATE()
                FROM    dbo.Parent
                WHERE   SomeID = @AnID;

        IF @Independent = 1
            COMMIT TRANSACTION;

    END TRY
    BEGIN CATCH

        IF @Independent = 1
            ROLLBACK TRANSACTION; -- roll back the entire transaction
        ELSE
            ROLLBACK TRANSACTION Notifications; -- roll back to the save point

        SELECT  ERROR_NUMBER(),
                ERROR_MESSAGE(),
                ERROR_LINE(); -- In reality, log this somewhere

    END CATCH;

GO

Looks good, let’s test. To test, I’m going to add a check constraint to the Notifications table that will be violated by the insert in the Notification procedure. This is to simulate the kind of errors that can happen in a real system (key violations, data type conversion failures, check constraint violations, etc). I’ll also add some sample data to the Parent table.

ALTER TABLE Notifications ADD CONSTRAINT Testing CHECK (NotificationDate > '2020-01-01')

INSERT INTO dbo.Parent (SomeRandomValue, NotificationRequired)
VALUES  ('abc',0), ('def',1),('ghi',0),('jkl',1),('mno',1);

First test, outside of a transaction. The requirement is that neither of the changes in the proc remains after the execution (because of the error).

SELECT SomeID, SomeRandomValue, NotificationRequired FROM dbo.Parent WHERE SomeID = 2;

EXEC dbo.NotificationProc @AnID = 2;

SELECT SomeID, SomeRandomValue, NotificationRequired FROM dbo.Parent WHERE SomeID = 2;
SELECT * FROM dbo.Notifications WHERE ParentID = 2;

[Image: IndependentTransaction - both changes rolled back after the error]

Success. When run independently the two data modifications were run in a transaction and when the error occurred, were rolled back.

Now let's try from within an existing transaction. The changes made in the outer procedure (the insert of a row into Child and the update of NotificationRequired to 1 in Parent) must commit, but the changes made in the inner proc must not.

SELECT SomeID, SomeRandomValue, NotificationRequired FROM dbo.Parent WHERE SomeID = 1;

EXEC dbo.OuterProc @SomeValue = 'abc', @SomeOtherValue = 7;

SELECT SomeID, SomeRandomValue, NotificationRequired FROM dbo.Parent WHERE SomeID = 1;
SELECT SomeOtherID, ParentID, SomeValue FROM dbo.Child WHERE ParentID = 1;
SELECT * FROM dbo.Notifications WHERE ParentID = 1;

[Image: NestedTransaction - outer procedure's changes committed, inner procedure's changes rolled back]

Again, exactly the desired behaviour. The changes made in the outer procedure were committed, the changes in the inner procedure, the procedure where the error was thrown, were rolled back.

Used correctly, savepoints can be a powerful mechanism for managing transactions in SQL Server. Unfortunately they’re not well known and as such their use can also make code much harder for later developers to debug.

The next part of the series on transactions will look at why (and why not) you’d want to name a transaction.

The full list is:

  1. A Mess of Nested Transactions
  2. Savepoints and conditional transactions (This post)
  3. Why would you want to name a transaction?
  4. When naming transactions causes an error

A Mess of Nested Transactions

Transactions are an area that I often find used badly, or not used at all: transactions without any error handling, nested transactions, transactions named for no good reason, or just a plain lack of transactions where they should be used.

This is going to be the first of a short series of posts looking at some aspects of transactions and transaction management. And deserving of the first post is that most misunderstood of things, the nested transaction.

Before getting into the details, I need to make one thing clear. Nested transactions are a lie. They do not exist in SQL Server.

A piece of code may appear to have multiple nested transactions in it. It may have multiple BEGIN TRANSACTION statements in it. It only has one transaction though.

An easy way to show this would be to look at the transaction log. Since transactions are there to make multiple statements atomic (committed or rolled back as a unit), the transactions must be reflected in the transaction log to allow SQL to know where to roll things back to or what statements need to be undone/redone as a unit.

CREATE TABLE TestingTransactions (
SomeNumber INT,
SomeString VARCHAR(50)
);
GO

INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
VALUES  (0, 'Zero');

CHECKPOINT; -- mark the log as reusable

BEGIN TRANSACTION

	INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
	VALUES  (1, 'One');

	BEGIN TRANSACTION

		INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
		VALUES  (2, 'Two');

		BEGIN TRANSACTION

			INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
			VALUES  (3, 'Three');

		COMMIT TRANSACTION

	COMMIT TRANSACTION

COMMIT TRANSACTION

SELECT Operation, Context, [Transaction ID], AllocUnitName, [Transaction Name]
FROM fn_dblog(NULL, NULL);

GO

I’m inserting a row before the transaction because allocating the first page to a table generates a lot of log entries. Those log records are not relevant to the subject at hand and will just confuse things.

One table, three transactions, three inserts, three commits. Let’s see what went into the transaction log.

[Image: TransactionLogNestedTransactions - fn_dblog output showing one BEGIN_XACT, three inserts and one COMMIT_XACT]

What went into the log was a single begin transaction, a single commit and three inserts. The other two begin transaction statements, and the other two commit transaction statements were completely ignored for the purposes of the transaction log.

The other two begin transaction statements did do something. Not very much, and nothing that reflected in the transaction log, but they did do one thing, and if I make the code a little more complicated it’ll become clear what that one thing is.

CHECKPOINT; -- mark the log as reusable

BEGIN TRANSACTION

	INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
	OUTPUT inserted.*
	VALUES  (1, 'One');
	SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran1;

	BEGIN TRANSACTION

		INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
		OUTPUT inserted.*
		VALUES  (2, 'Two');
		SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran2;

		BEGIN TRANSACTION

			INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
			OUTPUT inserted.*
			VALUES  (3, 'Three');
			SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran3;

		COMMIT TRANSACTION
		SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran1;

	COMMIT TRANSACTION
	SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran2;

COMMIT TRANSACTION
SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran3;

[Image: TranCount - @@TRANCOUNT values after each BEGIN and COMMIT]

Each BEGIN TRANSACTION incremented the value returned by @@TranCount, each COMMIT TRANSACTION decremented the value returned by @@TranCount. If I looked at the transaction log after each statement, the BEGIN_XACT log entry corresponds to the BEGIN TRANSACTION which incremented @@Trancount from 0 to 1, and the COMMIT_XACT log entry corresponds to the COMMIT TRANSACTION which decremented @@Trancount from 1 to 0.

Only the operations which changed @@Trancount to or from 0 are reflected in the transaction log, they’re the only ones which have any effect on what becomes part of the database’s durable state.

Does ROLLBACK then follow the same rules? Does it get ignored unless it’s the last statement of multiple COMMIT/ROLLBACK statements?

CHECKPOINT; -- mark the log as reusable

BEGIN TRANSACTION

	INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
	OUTPUT inserted.*
	VALUES  (1, 'One');
	SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran1;

	BEGIN TRANSACTION

		INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
		OUTPUT inserted.*
		VALUES  (2, 'Two')
		SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran2;

		BEGIN TRANSACTION

			INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
			OUTPUT inserted.*
			VALUES  (3, 'Three')
			SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran3;

		ROLLBACK TRANSACTION
		SELECT @@TRANCOUNT AS TransactionCountAfterRollbackTran1;

	COMMIT TRANSACTION
	SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran2;

COMMIT TRANSACTION
SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran3;

SELECT Operation, Context, [Transaction ID], AllocUnitName, [Transaction Name]
FROM fn_dblog(NULL, NULL);

[Image: RollbackTransaction - @@TRANCOUNT drops straight to 0 after the ROLLBACK]

[Image: TransactionLogRollbackTransaction - fn_dblog output showing the rollback]

No. The ROLLBACK TRANSACTION set the value of @@Trancount directly to 0 and undid all of the inserts back to the outermost BEGIN TRANSACTION. The COMMIT TRANSACTION statements which ran after it all threw errors:

Msg 3902, Level 16, State 1, Line 39
The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.

This is the danger of nesting transactions, that one ROLLBACK, anywhere after the first BEGIN TRANSACTION will undo everything right the way back to that first BEGIN TRANSACTION. These are not autonomous transactions being run one inside another, there’s a single transaction and rollback anywhere will roll back everything.

This can be a debugging nightmare when the application creates a transaction and calls a procedure which begins a transaction which calls other procedures which begin and commit transactions. If something goes wrong in one procedure and that procedure rolls the transaction back, it can end up undoing a lot more work than the developer of that procedure may have expected.

In general I would recommend only starting and committing transactions at a single level, and making transaction handling part of the application’s design, not something that falls out of the implementation by accident. If there’s an intentional design around how transactions will be handled (and choosing not to have any is not a good idea), then it’s a lot less likely that there will be transaction-related bugs that need to be found and fixed.

And what happens when working with an existing app where this wasn’t done? I’ll look at that in the next post on transactions, on conditionally starting transactions and on the use of savepoints.

The full list is:

  1. A Mess of Nested Transactions (This post)
  2. Savepoints and conditional transactions
  3. Why would you want to name a transaction?
  4. When naming transactions causes an error

Of clustered indexes and ordering

There is a particularly irritating and persistent belief that indexes (usually it's the clustered index that gets picked on) are always physically ordered within the data file by the key columns; that is, that the data within the database file is always ordered by the key column.

It doesn’t help that official documentation states this ‘fact’.

In this case I'm going to diverge from my usual methodology of first proving (or disproving) a statement and then explaining it.

Do indexes (clustered or non-clustered) define the physical storage order of the rows?

No, absolutely not.

What indexes do is provide a logical ordering, a collection of pointers, that allows the storage engine to retrieve data from an index ordered by the index key. That's logical ordering only; it specifies nothing regarding the physical ordering.

The index structure is such that the page with key values 4, 5 and 6 will appear earlier in the index's logical ordering than the page with key values 10, 11 and 12. Where these pages are in the file is not defined at all. The page with key values 10, 11 and 12 could be page 240 in the database file while the page with key values 4, 5 and 6 could be page 655.

On the data pages themselves there’s no guarantee that the row with the key value 4 will appear earlier on the page than the row with the key value of 6. 6 could be the first row on the page and 4 last and that would be just fine.

Let's prove this. Time for DBCC PAGE and some undocumented commands.

First up, the order of rows on the page. I’m going to create a table in a nice new database (so that there are no other tables around messing things up) and populate it with some data.

CREATE TABLE OddandEven (
SomeNumber INT,
Filler CHAR(500) DEFAULT ' '
) ;
GO

CREATE UNIQUE CLUSTERED INDEX idx_SomeNumber ON OddandEven (SomeNumber);
GO

INSERT INTO OddandEven (SomeNumber)
SELECT TOP (50) (ROW_NUMBER() OVER (ORDER BY object_id))*2 - 1 FROM sys.objects;

INSERT INTO OddandEven (SomeNumber)
SELECT TOP (50) (ROW_NUMBER() OVER (ORDER BY object_id))*2 FROM sys.objects;

So what I'm doing there is simply inserting 50 odd numbers first and 50 even numbers second.

A quick check with DBCC IND shows me that page 89 of this database is a data page for this table. I’m going to use dump style 2 for DBCC Page, because I want a raw binary dump with no interpretation (I’m removing the portions that are just the filler, as that’s just intentionally wasted space)
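The commands in question, roughly (the database name here is an assumption; trace flag 3604 sends DBCC PAGE output to the client):

DBCC IND ('TestDB', 'OddandEven', 1); -- list the pages belonging to index 1, the clustered index
DBCC TRACEON (3604);
DBCC PAGE ('TestDB', 1, 89, 2); -- file 1, page 89, dump style 2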

000000000EB6AC50:   20020000 1000fb01 37332020 20202020 † …..û.73
000000000EB6AE40:   20202020 20202020 20202020 20202002 †               .
000000000EB6AE50:   00001000 fb013735 20202020 20202020 †….û.75
000000000EB6AE60:   20202020 20202020 20202020 20202020 †
000000000EB6B040:   20202020 20202020 20202020 20020000 †             …
000000000EB6B050:   1000fb01 36342020 20202020 20202020 †..û.64 
000000000EB6B060:   20202020 20202020 20202020 20202020 †
000000000EB6B240:   20202020 20202020 20202002 00001000 †           …..
000000000EB6B250:   fb013636 20202020 20202020 20202020 †û.66
000000000EB6B260:   20202020 20202020 20202020 20202020 †

Hmm… 73, 75, 64, 66. That's not the correct physical ordering… What happened here is that I inserted the odd values first and they were written to the pages. Then, when I inserted the even numbers, the pages had to split first, leaving them probably around 50% full, and the even numbers were added in the empty space. SQL doesn't reorder the rows on the page (that would be expensive).

What keeps track of the logical ordering (which rows should be read first, second, etc. to get the results back in logical order) is the slot array at the end of the page:

OFFSET TABLE:
Row - Offset
 14 (0xe) - 7236 (0x1c44)
 13 (0xd) - 3666 (0xe52)
 12 (0xc) - 6726 (0x1a46)
 11 (0xb) - 3156 (0xc54)
 10 (0xa) - 6216 (0x1848)
 9 (0x9) - 2646 (0xa56)
 8 (0x8) - 5706 (0x164a)
 7 (0x7) - 2136 (0x858)
 6 (0x6) - 1626 (0x65a)
 5 (0x5) - 5196 (0x144c)
 4 (0x4) - 1116 (0x45c)
 3 (0x3) - 4686 (0x124e)
 2 (0x2) - 606 (0x25e)
 1 (0x1) - 4176 (0x1050)
 0 (0x0) - 96 (0x60)

That tells me that the row with the lowest key value is found at offset 0x60, the next lowest at offset 0x1050, then 0x25e, etc. The rows are not stored on this page in physical order, the slot array defines the logical order so that anything needing the rows in logical order of the index, can read them off the page that way.

That answers the question about rows on a page. Let’s now look at whether pages are always stored in physical order within the data file.

I’m going to drop the OddandEven table and create a new table with the rows sized so that only a few rows fit onto a page.

CREATE TABLE PagePhysicalOrder (
  SomeNumber INT,
  Filler CHAR(800) DEFAULT ' '
);

CREATE UNIQUE CLUSTERED INDEX idx_TestingPhysicalOrder ON PagePhysicalOrder (SomeNumber)

DECLARE @i INT = 9;
WHILE @i >= 0
  BEGIN
    INSERT INTO dbo.PagePhysicalOrder (SomeNumber, Filler)
    SELECT TOP (10)
      ROW_NUMBER() OVER (ORDER BY (SELECT 1)) +@i*10,''
      FROM sys.objects;

    SET @i = @i - 1;
  END

That gets me 100 rows in the table, written in groups of 10, with the higher values for SomeNumber being inserted first. Now, to find where the rows are stored, I’m going to use the sys.fn_PhysLocFormatter function and the %%physloc%% virtual column. See http://www.sqlskills.com/blogs/paul/sql-server-2008-new-undocumented-physical-row-locator-function/ for more details on these.

SELECT SomeNumber,
sys.fn_PhysLocFormatter(%%physloc%%) AS RowLocation
FROM dbo.PagePhysicalOrder

[Image: RowPhysicalLocations - fn_PhysLocFormatter output for the table]

The output of the PhysLocFormatter is FileID : Page Number : Slot Index. It shows that the rows with SomeNumber 75, 76, 77 and a few others are on page 197, while rows with a lower SomeNumber (65-70) are on page 248, further into the data file than the page containing the larger values of SomeNumber.

Hence we can say that the clustered index doesn’t enforce the physical order of the pages in the data file either.

The only thing that the clustered index (or nonclustered indexes) enforce is what values belong on a page together. If we have a table with an index on an integer column, we cannot have a situation where rows with a key value of 1, 2, 4, 8, 9 are on one page and rows with a key value of 3, 5, 6, 7 and 10 are on another. If only 5 rows fit onto a page, one page will have 1, 2, 3, 4 and 5 and another page will have 6, 7, 8, 9 and 10.  The physical order of the rows on those pages is irrelevant, as is the physical order of those two pages in the data file.

I suspect this myth came about because, when SQL creates or rebuilds an index, it will try as best as possible to put the pages of the index down in physical order of the index key. Doing so reduces logical fragmentation and allows the read-ahead reads to work as efficiently as possible. This applies only when the index is created, rebuilt or reorganised, not during regular operations.
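That logical-versus-physical mismatch is exactly what the logical fragmentation measurement reports. As a quick sketch against the demo table above:

SELECT index_id,
       avg_fragmentation_in_percent, -- logical fragmentation: pages out of key order in the file
       page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.PagePhysicalOrder'),
                                    NULL, NULL, 'LIMITED');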

PASS Summit 2015

It's two weeks until PASS Summit 2015!

It feels a little weird this year; it's the first year in quite some time where I'm not presenting in Seattle. The reason for that's pretty simple: I didn't submit any abstracts. It still feels strange to be going as an attendee, to be able to pick and choose sessions as I like, rather than having to work around the slots that I have to speak in, and to be able to spend time just wandering around the conference rather than rushing from place to place as is usual.

Still, that gives me a good opportunity to attend sessions I’m interested in. Looking over the schedule, my 5 must-attend sessions are (in no particular order)

Plus the keynote with Dr DeWitt and Dr Nehme is an event not to be missed!

Of course, there aren’t just the sessions. There’s the Birds of a Feather lunch, the Women in IT lunch, the SQL Clinic and the CAT team to ask difficult questions of, and the hundreds and thousands of ad-hoc conversations in the corridors, many of which may be more interesting than the sessions themselves. Not to mention the multitude of parties and events in the evenings.

Oh, and there will be Crazy Hat day as well. For those who don't know what I'm talking about, it's Argenis Without Borders, version 2. Please donate if you can, I really want to see Argenis dancing to music from a trombone.

Index selectivity and index scans

There was a question raised some time back: 'If an index is not selective, will the query operators that use it always be index scans?'

It's an interesting question and requires a look at what's going on behind the scenes in order to answer properly.

Short answer: No, not always.

Long answer…

Selectivity

Selectivity is a measure of what portion of the table satisfies a particular query predicate. The Microsoft whitepaper on statistics as used by the query optimiser defines selectivity as follows.

The fraction of rows from the input set of the predicate that satisfy the predicate. More sophisticated selectivity measures are also used to estimate the number of rows produced by joins, DISTINCT, and other operators.

Bart Duncan wrote a nice detailed blog post a while back explaining the difference between density, selectivity and cardinality. In summary, indexes have density, a measure of how unique the left-based column subsets within them are; predicates have selectivity, a measure of what portion of the table they affect; operators have cardinality, a measure of how many rows the operator processes.

Indexes cannot be said to be selective or not; they can only be said to have a high or low density. It is possible for a predicate on a very low density (unique) column to have very poor selectivity (a large percentage of the table affected). Imagine ID > 0 where ID is an int identity column. The column is unique, but the predicate affects the entire table. Low density (which is good), but poor selectivity.

So let’s alter the original question. “If an index has a high density (not very unique, lots of duplicate values), will query operators against it always be index scans rather than index seeks?”

Seek and Scan

Before we go on, I want to quickly look at the main difference between a seek operation and a scan operation.

A seek is an operation which navigates down the index's b-tree looking for a row or for the start/end of a range of rows. A seek requires a predicate, and that predicate must be of a form that can be used as a search argument (SARGable).

A scan is a read of part or all of the leaf level of an index.

High-density indexes

So what exactly is the problem with a high density index? In short, it returns a lot of rows for any predicate filter against it (unless there's a TOP involved, but let's ignore those cases here). If the index has a high density (and let's assume for simplicity there's no data skew here), any predicate using that index automatically has poor selectivity; it returns a large portion of the table.

If we take as an example a 100 000 row table with a column called Status that has only 4 values, then, assuming that the distribution of those values is equal, a query with a predicate searching for one of those values will read 25 000 rows. If we have a nonclustered index on that integer column, it works out that the nonclustered index has 223 pages at the leaf level and is 2 levels deep in total. Given that the four values have equal distribution, an index seek to retrieve the rows for one of those status values will require approximately 57 pages to be read (a quarter of the 223 leaf pages, plus the root).

Is the index scan better?

The scan will read all the leaf pages, that's what a scan does (ignoring cases like min, max and top where it can scan and read only part of the index). So if SQL decided to use an index scan because of the high density of the index, it would have to read all 100 000 rows on all 223 pages (plus the index root page).

57 pages for the index seek vs 224 pages for the index scan. Looks pretty obvious which is better. To prove that I’m not making things up, let me test this and get actual numbers.

First the setup:

CREATE TABLE TestingIndexSeeks (
   Status INT,
   Filler CHAR(795) DEFAULT ''
);

INSERT INTO TestingIndexSeeks (Status)
SELECT NTILE(4) OVER (ORDER BY (SELECT 1)) AS Status FROM (
    SELECT TOP (100000) 1 AS Number FROM sys.columns a CROSS JOIN sys.columns b
) sub

CREATE NONCLUSTERED INDEX idx_Testing_Status ON dbo.TestingIndexSeeks (Status)

GO

Then the test:

SELECT status FROM dbo.TestingIndexSeeks WITH (FORCESEEK) WHERE Status = 3

SELECT status FROM dbo.TestingIndexSeeks WITH (FORCESCAN) WHERE Status = 3

Statistics IO for the two queries:

Seek
Table ‘TestingIndexSeeks’. Scan count 1, logical reads 59, physical reads 0.

Scan
Table ‘TestingIndexSeeks’. Scan count 1, logical reads 225, physical reads 0.

Yup, the index seek is better, and it's the one that the optimiser chooses if it is allowed to choose.

[Image: IndexSeek - execution plan showing the index seek]

High density indexes and the clustered index

So why the confusion around index scans on high density indexes? I suspect it’s because of the way the optimiser handles noncovering indexes where the predicates are not selective. This has nothing to do with the efficiency of the seek or scan operators on the nonclustered index though, it’s got to do with the mechanism used for the key lookup.

If a nonclustered index that SQL could use for a query is not covering, then for each row in that resultset it has to do a lookup back to the cluster/heap for the rest of the columns. Those key (or RID) lookups are expensive operations. If there are too many needed, the optimiser switches to a scan, not of the nonclustered index (that would be pointless, it's still not covering), but of the clustered index, because that at least has all the columns needed for the query. (It could also switch to a scan of a different nonclustered index if there is one that's covering, but with columns in the wrong order to be seekable.)
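To make the scenario concrete: suppose the test query above also needed the Filler column. A covering index along these lines (the wider query and the index name are hypothetical additions to the demo) removes the lookups, and with them the reason to switch to a scan:

-- SELECT Status, Filler FROM dbo.TestingIndexSeeks WHERE Status = 3;
CREATE NONCLUSTERED INDEX idx_Testing_Status_Covering
ON dbo.TestingIndexSeeks (Status)
INCLUDE (Filler);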

Summary

In summary, does having a high density nonclustered index result in index scans of that index? No (unless the predicate is not SARGable). However, it can result in scans of a different index (probably the clustered index) if the high density index is not covering for the query, with that high density index left unused.

Repairing a replication subscriber

It's a question which has come up a couple of times: if a subscriber of a transactional replication publication becomes corrupt, is running CheckDB with REPAIR_ALLOW_DATA_LOSS safe?

The theory is, since the subscriber is a copy of another database, allowing CheckDB to discard data in the process of repairing won’t actually cause data loss (the data is still there in the publisher).

As with many things, it's partially true; however, there's a little more to it than just that.

To see why, let’s set up some replication and intentionally corrupt some data in the subscriber and run a repair.

First, the source DB. I’m going to keep it really simple so that we can see the effects. A database with a single table called Customers, populated with 100 000 rows via SQLDataGenerator. The filler column simulates other columns that would be in a real table.

CREATE TABLE Customers (
CustomerID INT IDENTITY PRIMARY KEY,
FirstName VARCHAR(50),
Surname VARCHAR(50),
AccountNumber CHAR(15) UNIQUE,
AccountStatus VARCHAR(25),
Filler CHAR(200) DEFAULT ''
)

Once populated, we set up a transactional replication publication with default settings, and for simplicity have the subscriber be a second database on the same instance. If we query the table on the subscriber, we have exactly the same number of rows as the publisher has.

[Image: ReplicationSubscriber - matching row counts on publisher and subscriber]

To simulate a misbehaving IO subsystem, I'm going to take the subscription DB offline, open the mdf in a hex editor, scribble on top of a few pages in the user table, then bring the DB back online. A CheckDB returns a couple of pages of errors, ending with:

CHECKDB found 0 allocation errors and 16 consistency errors in table ‘Customers’ (object ID 421576540).

CHECKDB found 0 allocation errors and 18 consistency errors in database ‘ReplicationSubscriber’.

repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (ReplicationSubscriber).

Minimum level to repair is REPAIR_ALLOW_DATA_LOSS and, based on the error messages, doing so will deallocate five pages. Let’s run the repair and then see what’s left in the table.
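The repair itself, sketched (REPAIR_ALLOW_DATA_LOSS requires the database to be in single-user mode):

ALTER DATABASE ReplicationSubscriber SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (ReplicationSubscriber, REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE ReplicationSubscriber SET MULTI_USER;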

[Image: ReplicationSubscriber after repair - 155 rows missing from Customers]

We’ve lost 155 rows from the Customers table. The replication publisher still has them, but the subscriber now does not. So what happens if someone makes some data modifications on the publisher and changes one of the rows that was lost at the subscriber?

UPDATE dbo.Customers
SET AccountStatus = 'Closed'
WHERE CustomerID = 48700;
GO

INSERT INTO dbo.Customers
        (FirstName,
         Surname,
         AccountNumber,
         AccountStatus
        )
VALUES  ('Mary',
         'White',
         '4985563214AN',
         'Pending'
        );

Give it a few seconds to replicate and then let’s check the table in the publisher. The new customer isn’t there… If we go and have a look at the replication, it’s throwing errors.

[Image: BrokenReplication - replication monitor showing the failed command]

“The row was not found at the Subscriber when applying the replicated command. “, which is quite true, the row’s not there any longer.

What’s happened is that one of the rows updated on the publisher was lost when CheckDB repaired the database. The rows lost due to the repair weren’t automatically fetched from the publisher, there’s no mechanism in transactional replication for that to happen. Hence when the row was updated on the publisher, the update statement was replicated and the row on the subscriber couldn’t be found, causing the replication to fail, retry and fail again. It will keep failing until it is either reinitialised or the missing row is manually added to the subscriber.

What we needed to have done, to prevent this outcome, was, after CheckDB deallocated the pages, to manually sync the subscriber with the publisher (or use something like SQLDataCompare to do it) and explicitly add back the rows which were on the deallocated pages. The primary key values must be kept the same, as that's what replication uses to identify which rows in the subscriber need to be updated.

So to go back to the original question, yes it’s probably fine to run CheckDB with repair on a replication subscription, providing that the data deallocated as part of the repair is manually synced from the publisher afterwards. If it isn’t, the replication will break as soon as one of the rows affected by the repair is updated or deleted at the publisher.