Execution Plans

When a forced plan isn’t forced

One of the uses for the Query Store, added in SQL 2016, is to force plans. Once forced, a plan is supposed to remain in use; however, there are cases where a forced plan will not be applied and a new plan will be generated instead.
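
For reference, forcing a plan in the Query Store is done with sp_query_store_force_plan, given a query_id and plan_id from the Query Store catalog views. A minimal sketch (the IDs below are placeholders, not the values from the demo that follows):

-- Find the query and its plans in the Query Store
SELECT qsq.query_id, qsp.plan_id, qst.query_sql_text, qsp.is_forced_plan
FROM sys.query_store_query qsq
INNER JOIN sys.query_store_query_text qst ON qst.query_text_id = qsq.query_text_id
INNER JOIN sys.query_store_plan qsp ON qsp.query_id = qsq.query_id
WHERE qst.query_sql_text LIKE '%Shipments%';

-- Force a specific plan for a specific query (IDs are placeholders)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- And to remove the forcing again later
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;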

Statistics changes, which are one of the things that usually cause recompiles, don’t disable a forced plan. It would be kinda weird if they did, and rather against the point of forcing a plan in the first place.

Schema changes are another matter.

Let’s look at a couple of cases.

First, schema changes that make the plan invalid, in other words, schema changes that affect something that the plan explicitly references. There aren’t that many schema changes that can make the plan invalid without making the query invalid as well, but there are a couple. Index changes, for example.

I want to test a few things:

  • An index change that won’t make the plan invalid (eg adding a column)
  • An index change that does make the plan invalid (eg removing a column that the query needs)
  • Renaming the index without changing its definition
  • Adding an index that would be better for the query than the one referenced by the forced plan.

First, the setup. My Interstellar Trading database with an extra index added:

CREATE INDEX idx_ForcingTest ON Shipments (ClientID, HasHazardous, HasLiveStock, HasTemperatureControlled)

The query I’ll be running to test is:

DECLARE @Storage TABLE (ID INT, Priority TINYINT, CountShipments INT);

INSERT INTO @Storage
SELECT OriginStationID, Priority, COUNT(*)
FROM Shipments
WHERE ClientID = 17 AND HasHazardous = 1
GROUP BY OriginStationID, Priority;
GO

It’s inserting into a table variable to avoid any problems with result sets in SSMS.

I’ve been running the query for a while, and its plan is forced.

image

First test:

CREATE INDEX idx_ForcingTest ON Shipments (ClientID, HasHazardous, HasLiveStock, HasTemperatureControlled, ReferenceNumber)
WITH (DROP_EXISTING = ON)

image

No change. Forced plan is still forced.

Now, let’s make that index less useful by removing a column that the query does need. There’s a key lookup in the plan, so there is still a way for the column to be obtained, but it would change which columns come from each operator and where the filters are applied. Same plan shape, but different details.

CREATE INDEX idx_ForcingTest ON Shipments (ClientID)
WITH (DROP_EXISTING = ON)

image

We get a new plan. The forced one is now invalid, because the index no longer supports the seek predicates defined in the plan, and so the forcing is ignored and the optimiser generates a new plan.

The query still runs without error, which is better than we’d have got with the old USE PLAN hint.
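
To confirm that the forcing actually failed, rather than just noticing a different plan, the Query Store records the failure. Something along these lines (my choice of columns, not from the original post):

SELECT plan_id,
       query_id,
       is_forced_plan,
       force_failure_count,
       last_force_failure_reason_desc
FROM sys.query_store_plan
WHERE is_forced_plan = 1;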

Once I revert the index back to its original definition, the forced plan starts being used again.
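
Reverting is just the original definition again, rebuilt over the existing index:

CREATE INDEX idx_ForcingTest ON Shipments (ClientID, HasHazardous, HasLiveStock, HasTemperatureControlled)
WITH (DROP_EXISTING = ON);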

How about renaming the index? Since the plan references the index by name, this will probably also cause the plan forcing to fail.
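
The rename itself is a plain sp_rename (the new name here is just an example):

EXEC sp_rename 'Shipments.idx_ForcingTest', 'idx_ForcingTest_Renamed', 'INDEX';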

And indeed it does.

image

One last test. I’m going to rename the index back to its old name, and then add a new one that’s better for the query than the index referenced in the forced plan.
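
The new index is, roughly, a covering index for the test query. The exact definition I used isn’t important; something along these lines:

CREATE INDEX idx_ForcingTest_Covering ON Shipments (ClientID, HasHazardous)
INCLUDE (OriginStationID, Priority);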

image

And we’re still using the forced plan. The addition of a new index did not invalidate the existing plan, and hence the forced plan will still be used, even when there’s a better index.

This is the reason why I recommend using plan forcing only to fix stuff that’s broken in prod, and then looking for a longer-term solution that doesn’t rely on forced plans. It’s not always possible, but where it is I’d prefer not to leave the plan forcing in place, because it does mean that new indexes are not considered. Plus, if the Query Store is ever cleared, the forced plan (along with the forcing) is gone.

Obsessing over query operator costs

A common problem when looking at execution plans is attributing too much meaning and value to the costs of operators.

The percentages shown for operators in query plans are based on costs generated by the query optimiser. They are not times, they are not CPU usage, they are not IO.

The bigger problem is that they can be completely incorrect.

Before digging into the why of incorrect percentages, let’s take a step back and look at why those costs exist.

The SQL query optimiser is a cost-based optimiser. It generates good plans by estimating costs for each query operator and then trying to minimise the total cost of the plan. These costs are based on estimated row counts and heuristics.

The costs we see in the query plan are these compile-time estimates. Because they’re generated at compilation, they won’t change between one execution of a query using a particular plan and another execution using the same plan, even if the parameter values are different, even if the row counts through the operators are different.

Since the estimates are partially based on those row counts, any time the query runs with row counts different from what was estimated, the costs will be wrong.

Let’s look at a quick example of that.
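
The query behind the plans below is, more or less, the following (the table and column names are my reconstruction, not taken from the screenshots; a parameterised call is used so that the plan gets reused):

EXEC sp_executesql
    N'SELECT CustomerName, AccountNumber
      FROM Customers
      WHERE CustomerID = @CustomerID',
    N'@CustomerID INT',
    @CustomerID = 0; -- no customer has an ID of 0, so the plan is compiled for a one-row estimate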

Cost1

There are no customers with an ID of 0, so the plan is generated with an estimate of one row being returned by the index seek, and one row looked up against the clustered index. Those are the only two operators that do any real work in that plan, and each is estimated to read and fetch just one row, so each gets an estimate of 50% of the cost of the entire query (0.0033 to be specific).

Run the same query with a different parameter value and the plan is reused, so the displayed costs are the same.

Cost2

That parameter returns 28 rows. The index seek is probably much the same cost, because reading one row or 28 contiguous rows isn’t much different in terms of work. The key lookup is a different matter. It’s always a single-row seek, so to look up 28 rows it has to execute 28 times, and hence do 28 times the work. It’s definitely no longer 50% of the work of executing the query.

The costs still show 50%, because they were generated for the 0-row case and are simply displayed here. They’re not run-time costs; they’re compile-time costs, tied to the plan.

Another thing that can make the cost estimates inaccurate is incorrect costing calculations by the optimiser. Scalar user-defined functions are the easiest example there.
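
As a rough illustration (this is a stand-in, not the function from the screenshot below, and the Clients table is assumed), a scalar UDF that hides a table access per row gets costed as if it were almost free:

-- Hypothetical scalar UDF; the optimiser costs each call as near-zero,
-- even though it does a table access every time it runs.
CREATE FUNCTION dbo.ClientShipmentCount (@ClientID INT)
RETURNS INT
AS
BEGIN
    DECLARE @Count INT;
    SELECT @Count = COUNT(*) FROM Shipments WHERE ClientID = @ClientID;
    RETURN @Count;
END;
GO

-- Calls the UDF once per row: low estimated cost, slow to run
SELECT ClientID, dbo.ClientShipmentCount(ClientID) AS ShipmentCount
FROM Clients;

-- Equivalent join and aggregate: a higher share of the estimated batch cost, much faster in practice
SELECT c.ClientID, COUNT(s.ClientID) AS ShipmentCount
FROM Clients c
LEFT JOIN Shipments s ON s.ClientID = c.ClientID
GROUP BY c.ClientID;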

CostsOff

The first query there, the one that’s apparently 15% of the cost of the batch, runs in 3.2 seconds. The second runs in 270 ms.

The optimiser gives scalar UDFs a very low cost (though they have their own plans, with their own costs), and so the costs for the rest of the query and batch are meaningless.

The costs in a plan may give some idea of what’s going on, but they’re not always correct and should not be obsessed over, especially not when the plan’s a simple one with only a couple of operators. After all, the cost percentages add to 100% (usually).

Blocking operators and actual row counts

Query plans can sometimes be hard to read, and other times can be downright mystifying.

Take this plan for example. Not too hard in general. Two index seeks/scans, a join, a sort and a filter. The peculiarity here is in the actual row counts. We expect that a join can filter rows out, that a filter can, well, filter rows out, that a top can reduce rows, that any aggregation can reduce the row count.

SortActualRowsBefore

SortActualRowsAfter

But why is a sort operator, a normal sort, reducing the row count? The answer lies partly in how control, rather than data, flows through the plan, and partly in the types of operators in the plan.

First let’s look at the types of operators. Here I don’t mean joins and aggregates and the like, I’m referring to whether an operator is a blocking operator or a non-blocking operator.

A non-blocking operator is one that consumes and produces rows at the same time. Nested loop joins are non-blocking operators.

A blocking operator is one that requires that all rows from the input have been consumed before a single row can be produced. Sorts are blocking operators.

Some operators can be somewhere between the two, requiring a group of rows to be consumed before an output row can be produced. Stream aggregates are an example here.

The sort in the plan is a blocking operator, and hence it needs all the rows from the operator before it, the loop join, before it can output any rows. That’s the 2920 rows going into it, but why are there only 50 rows coming out?

That’s down to the way a query executes. Starting at the top of the plan, the root operator, in this case the SELECT, asks the operator beneath it for a row. If that operator isn’t one that can generate a row itself (an index scan, for example, can), then it in turn asks the operator beneath it for a row.

The query that generated the shown plan had a filter on the generated Row_Number: RowNumber between 26 and 50. This filter was executed by the Filter operator and partially by the Top operator.
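
The shape of the query was something along these lines (the table and column names are my guess; the relevant parts are a ROW_NUMBER with an ORDER BY, no PARTITION BY, and the filter on rows 26 to 50):

SELECT *
FROM (
    SELECT s.ShipmentID, s.ShipDate, c.ClientName,
           ROW_NUMBER() OVER (ORDER BY s.ShipDate) AS RowNumber
    FROM Shipments s
    INNER JOIN Clients c ON c.ClientID = s.ClientID
) AS Numbered
WHERE RowNumber BETWEEN 26 AND 50;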

RestOfPlan

FilterProperties TopProperties

The Top is there because the filter is on a Row_Number, the resultset is sorted by the columns defined in the Row_Number’s ORDER BY, and there’s no PARTITION BY. The row numbered 50 will be the 50th row in the resultset, and after that point there can be no more rows that satisfy the predicate. The query processor knows this.

So, the first row is requested by the select. The Filter can’t generate a row so it asks the Top for a row, and so on down the plan until we get to the sort.

The sort can’t request just one row from the operator below it; it’s a blocking operator, so it has to fetch all the rows from the operator below it. All 2920 of them.

Once the sort has all the rows, it sorts them and returns one row back to the previous operator. Repeat for the next row and the next.

Let’s fast-forward a few rows. The filter has just returned row 50 to the select operator. The select asks for the next row, row 51. The filter asks the top for the next row. The top, however, knows that it was only supposed to return the first 50 rows, and so instead it tells the filter operator that there are no more rows. The filter passes that up to the select, and the query ends there.

Hence we have a sort further down the plan that only output 50 rows. Not because it filtered the rows itself, but because it’s a blocking operator and the operators above it only asked for 50 rows.

It’s important to be able to read the execution plan in both directions. Reading the plan right-to-left is reading it in the direction of the data flow. Reading it left-to-right is reading it in the direction of the control flow. To fully understand plans it’s necessary to be able to do both.

Obsessing over query operator costs

A common problem when looking at execution plans is attributing too much meaning and value to the costs of operators.

The percentages shown for operators in query plans are based on costs generated by the query optimiser. They are not times, they are not CPU usage, they are not IO. The costs that the query optimiser generates are unit-less numbers that it uses internally to estimate the relative expense of plans as it optimises a query.

For starters, the percentages of the operators should add up to 100 across the entire plan, so worrying that the only data access operator in a simple query plan shows 100% is pointless. If there’s only one index seek/scan in the plan, of course it’s going to be close to 100% of the total cost of the plan; there’s nowhere else for the cost to go.

But that’s not all. The accuracy of these estimates is based, in part, on the accuracy of the row estimations. If the statistics are out of date or there are any other row estimation errors, then the costs, and as a result the percentages shown, will be wildly incorrect.

Take, for example, this query plan.

ExampleExecutionPlan

According to the percentages, the two operators were of equal cost. But a key lookup is just a single-row clustered index seek by a different name, and if we look at the number of executions of the two operators, it’s clear that they cannot possibly have been the same cost to execute.

IndexSeek  KeyLookup

An index seek to return 1 million rows and a million index seeks to fetch one row each are not going to take the same amount of resources to execute, and hence those percentages are completely misleading.

Because the percentages can easily be way out, focusing on them when performance tuning is potentially going to result in a lot of wasted time. There is no single value, counter, measure or result that’s going to, by itself, indicate the cause of performance problems. Obsessing over single data points, or focusing on changing a single data point, is almost certainly going to waste time.

Oh, and if anyone still wants to attribute importance to the percentages…

CostPercentages

Q&A from the DBA Fundamentals Virtual Chapter

A couple of weeks ago I did a presentation to the DBA Fundamentals virtual chapter. The presentation title was “What execution plans can tell you about query performance”.

The slides and recording are available at the Virtual Chapter’s home page.

I didn’t manage to get all of the questions answered, so here are a couple of slightly more involved questions which didn’t get answered.

Does the order of tables matter when doing an inner join?

Short answer: No.

Long answer: Maybe, but it shouldn’t.

The optimiser decides which table is joined in which order. Putting a table first in the join clause does not mean it will be the first one processed. In general (as in, in ~99% of cases), put the tables in the join clause in the order which makes logical sense for the query.

Changing table order can, in some cases, change the plan. This doesn’t mean that SQL uses the order in which the tables are specified to determine the plan; it just means that changing the query resulted in the optimiser searching through the plan search space in a different way and finding a different ‘good enough’ plan. It’s not going to be deterministic and hence shouldn’t be relied on.

Will moving a filter from the WHERE to the INNER JOIN improve performance?

No, but again it can change the plan generated, as described above. Personally I prefer join predicates in the ON clause and filters in the WHERE clause, because that’s what’s normal and expected.

Please note that moving filters between the WHERE clause and the ON clause of an OUTER JOIN changes the logic of the query and likely the results.
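
To illustrate that point (made-up tables, purely to show the semantics):

-- Filter in the WHERE clause: rows where the outer join produced NULLs fail the
-- predicate and are removed, effectively turning the LEFT JOIN into an INNER JOIN.
SELECT c.CustomerID, o.OrderID
FROM Customers c
LEFT JOIN Orders o ON o.CustomerID = c.CustomerID
WHERE o.OrderStatus = 'Open';

-- Filter in the ON clause: it only restricts which Orders rows join;
-- customers with no 'Open' orders are still returned, with NULLs for the Orders columns.
SELECT c.CustomerID, o.OrderID
FROM Customers c
LEFT JOIN Orders o
    ON o.CustomerID = c.CustomerID
    AND o.OrderStatus = 'Open';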

If multiple users are running the same query with different parameter values, will it result in different plans or recompiles?

Neither.

There will be one plan in cache (unless the SET options differ, but let’s ignore that for now). No matter what the parameter values are, when the same query is run, the plan will be fetched from cache and used.
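
An easy way to see this is to watch the cached plan’s use count climb as the query is run with different values. A sketch, filtering on some recognisable fragment of the query text:

SELECT cp.usecounts, cp.objtype, cp.cacheobjtype, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%SomeRecognisableFragment%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%'; -- exclude this query itself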

Does index fragmentation have an effect on the join type chosen?

The Query Optimiser has no idea what logical fragmentation is. It doesn’t base its choices on how the pages are laid out in the data file. Logical fragmentation affects large range scans from disk; that’s all. If the pages are in memory, then fragmentation has no further effect.

Compiles and recompiles

I want to spend some time over the next few months looking at query compilation and the plan cache, and there are a couple of concepts that I want to get cleared up beforehand. The first of those is around two terms that are often used interchangeably: compile and recompile.

Compile

A compile occurs when a query is given to the query optimiser and, when it does a lookup into the plan cache, no matching plan is found. The optimiser must then compile the query, generating an execution plan, add that plan to the plan cache (in most cases) and then pass that plan on to the query execution engine so that the query can be executed. (http://technet.microsoft.com/en-us/library/Cc966425)

Recompile

A recompile is something slightly different. For a recompile, the optimiser must find a matching plan when it queries the plan cache and hand that cached plan over to the query execution engine; then, while doing validation checks, the execution engine must determine that the query plan is no longer valid and request that the optimiser partially or completely recompile the query. (http://technet.microsoft.com/en-us/library/Cc966425)

Subtle difference. Both cases result in the optimiser generating an execution plan, but the reasons can be different. Also worth noting is that a compile results in a new plan in the cache, while a recompile simply replaces an existing plan.

Another difference since SQL 2005 – a compile is for the entire batch, but a recompile can be for just a single statement within the batch.


Now that the theory’s dealt with, let’s look at some examples to see how we can track these two events, and try to get a better understanding of which occurs when and what they look like.

The tools I’m going to use to track these are Performance Monitor, with the SQL Compilations/sec and SQL Re-Compilations/sec counters, and SQL Profiler, with the SP:StmtRecompile event (there’s no Profiler event for compilation). I’ll also check what’s in the plan cache after each test.

The first one’s going to be very simplistic, a query run against an empty plan cache.

DBCC FREEPROCCACHE
GO

EXEC dbo.OutStandingTotalByStatus
GO

What we get from that is a non-zero value for SQL Compilations/sec (perfmon) and the following from Profiler (the SQL Re-Compilations/sec counter remains 0).

Compiles1

and the plan cache now contains one plan with one use. (For more info on how the CacheMiss and CacheInsert events work, see http://www.sqlinthewild.co.za/index.php/2010/07/27/hit-and-miss/ and http://www.sqlinthewild.co.za/index.php/2010/08/31/come-and-gone/)

Compiles2

In this case, I hope it was clear, we had a compile occur (empty plan cache before, new plan added to cache).

Now what happens if, with no clearing of the cache nor anything else being done, I mark that procedure for recompile and run it again?
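
Marking the procedure for recompile is done with sp_recompile, so the next test is along these lines:

EXEC sp_recompile 'dbo.OutStandingTotalByStatus';
GO

EXEC dbo.OutStandingTotalByStatus;
GO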

(more…)

Capturing the Execution Plan

One last post on execution plans and Profiler (at least for now)

When trying to check a query’s execution plan, the display execution plan option of Management Studio is usually adequate, however there are occasions where it’s either not feasible to run the query from Management Studio or the particular behaviour can’t be reproduced in Management Studio (perhaps because of different SET options). For cases like this it may be necessary to capture the execution plans via SQL Trace. Fortunately there are events for execution plans. Eight of them, to be precise, all under the Performance folder.

PlanEvents

Great, so there’s no shortage of options available. But what are the differences between them?

Showplan All

According to Books Online:

The Showplan All event class occurs when Microsoft SQL Server executes an SQL statement. Include this event class to identify the Showplan operators on Microsoft SQL Server 2000 or Microsoft SQL Server 7.0. This event class will also work on SQL Server 2005 and later; however, the information included is a subset of the information available in the Showplan XML Statistics Profile or Showplan XML event class.

So, in other words, this one is not generally the event that we should be looking at on the latest versions of SQL. It’s not deprecated, however (at least not in SQL 2008), so it is still usable on the later versions if you absolutely want to.

Showplan

(more…)

Do IF statements cause recompiles?

I heard this one over at SSC a while back. “Avoid IF statements in stored procedures as they result in recompiles”

OK, it sounds reasonable: if the optimiser optimises for the execution path taken on first execution, it’ll have to go back and optimise the other paths when they are executed.

But is that really how it works? Now that I’ve spent some time looking at how the cache-monitoring trace events behave, it’s possible to find out using those.

Let’s start with a simple example

CREATE TABLE Tbl1 (
  ID INT
);

CREATE TABLE Tbl2 (
  ID VARCHAR(10),
  SomeDate DATETIME
);
GO

CREATE PROCEDURE TestingRecompiles (@SomeParam INT)
AS
IF (@SomeParam = 1)
  SELECT ID FROM Tbl1;
ELSE
  SELECT SomeDate FROM Tbl2;
GO

Simple enough. First execution will be with the parameter value of 1. I’m going to use Profiler to see what’s happening. Events traced are SP:CacheInsert, T-SQL:StmtRecompile and the XML Plan for query compile, so I can see exactly what plan was generated. I’m using the ‘For Query Compile’ event so that I can catch the plan at optimisation time, not at execution time.

TraceEvents
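
So the first execution, presumably followed later by one that takes the other branch, looks something like this:

EXEC TestingRecompiles @SomeParam = 1;
GO

-- and then, to exercise the other branch of the IF
EXEC TestingRecompiles @SomeParam = 2;
GO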

(more…)

Recompiles

I’ve looked at cache hit and miss events and at the cache insert and remove events. The only cache-monitoring event (of current interest) left is the recompile event.

There are two recompile events available in Profiler in SQL 2008:

  • SP:Recompile under Stored Procedures
  • SQL:StmtRecompile under T-SQL

Which to use when?

Books Online has the following to say on the two events.

The SQL:StmtRecompile event class indicates statement-level recompilations caused by all types of batches: stored procedures, triggers, ad hoc batches, and queries. Starting in SQL Server 2005, the SQL:StmtRecompile event class should be used instead of the SP:Recompile event class.

and

The SP:Recompile event class indicates that a stored procedure, trigger, or user-defined function has been recompiled. In SQL Server 2005 and later, recompilations reported by this event class occur at the statement level, whereas those in SQL Server 2000 occurred at the batch level.

In SQL Server 2005 and later, the preferred way to trace statement-level recompilations is to use the SQL:StmtRecompile event class. Starting in SQL Server 2005, the SP:Recompile event class is deprecated.

So it appears that they show the same thing, and SP:Recompile is deprecated. That simplifies the entire situation; the only one that I’m going to look at in that case is the SQL:StmtRecompile event.
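
As an aside, an easy way to make the event fire (my own example, not from the original post) is a procedure that creates and then queries a temp table; the statement referencing the temp table typically shows a statement-level recompile (deferred compile / statistics changed) when it runs:

CREATE PROCEDURE dbo.RecompileDemo
AS
    CREATE TABLE #Working (ID INT, SomeValue VARCHAR(20));

    INSERT INTO #Working (ID, SomeValue)
    SELECT TOP (1000) message_id, 'x'
    FROM sys.messages;

    -- This statement is typically recompiled at runtime, firing SQL:StmtRecompile
    SELECT ID, SomeValue
    FROM #Working
    WHERE ID = 101;
GO

EXEC dbo.RecompileDemo;
GO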

So what does the event look like?

RecompileEvents

(more…)

Come and gone

Or “Plan cache monitoring – insert and remove”

Previously I took a look at the CacheHit and CacheMiss events to see how they behave and how to identify what’s been searched for in the cache. In this follow-up, I want to take a similar look at the CacheInsert and CacheRemove events, to see when they fire and how to identify the objects that they relate to.

Again, a word of caution: these can be frequently occurring events on busy servers, and so traces should be kept short and to a minimum of events and columns. That said, these should occur a lot less often than the CacheHit and CacheMiss events. If they are occurring very frequently, it may indicate that SQL Server is not reusing plans efficiently.

CacheInsert

The CacheInsert event fires after a CacheMiss. The search for a matching plan in the cache failed, firing a CacheMiss event. Since there’s no plan, the optimiser is invoked to generate one and then that plan is inserted into the plan cache before the Query Execution engine begins execution.

The event is fairly simple, though of course there are a few surprises (what in SQL doesn’t have any?).

EXEC FireCacheEvents
GO

SELECT ID, SomeDate, Status
FROM TestingCacheEvents
WHERE Status = 'C'

CacheInsert

(more…)