What’s the best join type for a query? Should we aspire to seeing nested loop joins in all our queries? Should we tremble with horror at the sight of a hash join?
Well, it depends. 🙂
There’s no single join type that’s best in every scenario, and there’s no join type that’s bad to have in every scenario. If one of the join types, say the much-maligned hash join, were sub-optimal in every single scenario, there would be no reason for it to be in the product and no reason for the optimiser to ever select it for a plan. Since there are three join types, and the optimiser can and does use all three, we must assume that they are all useful in some circumstances.
I took a look at the joins a while back, but it’s worth revisiting.
The nested loop join
A nested loop join is an optimal join type when two conditions are true:
- One of the resultsets contains quite a small number of rows.
- The other table has an index on the join column(s).
When both of these are true, SQL can do a very efficient nested loop join. The smaller resultset becomes the outer table of the join, a loop runs across all the rows in that resultset, and index seeks are done to look up the matching rows in the inner table. It’s important to note that the number of seeks against the inner table will not be less than the number of rows in the outer table at the point the join occurs.
If one resultset has a small number of rows but there is no index on the other table on the join column(s), then a loop join can still be done, but it is less optimal, as the entire inner table (or a subset based on another filter condition) must be read on each iteration of the loop.
If both resultsets have large numbers of rows but there is an index on the join columns in one of the tables then the nested loop can still read through one of the resultsets and do index seeks to locate matching rows, but the number of rows in the outer table will mean lots and lots of seek operations, which may result in a sub-optimal plan.
If both resultsets have large numbers of rows and there are no indexes on the join column(s) then the nested loop becomes a very sub-optimal join as the large number of rows means that the inner table will have to be read many, many times and the lack of indexes means that whichever table is picked as the inner table will have to be read in its entirety each time.
Bottom line: Forcing a nested loop join on queries that process larger numbers of rows can result in very poor performance, especially if the join column(s) are not indexed.
So let’s have a look at a couple of examples. Table creation code:
CREATE TABLE JoinTable1 (
    ID INT IDENTITY PRIMARY KEY,
    SomeString VARCHAR(4),
    RandomDate DATETIME
)

CREATE TABLE JoinTable2 (
    JoinString VARCHAR(4) PRIMARY KEY,
    Filler CHAR(100)
)

CREATE INDEX idx_Test ON JoinTable1 (SomeString, ID)
I populated these with RedGate’s Data Generator, so I have no generation code. It shouldn’t be hard to create: 1 million rows in the first table, and the matching distinct values of the string column (512 of them in my case) in the second table.
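If you want to reproduce it without the Data Generator, something along these lines should give a comparable distribution (a sketch, not the original generation code; the exact strings and dates don’t matter):

;WITH Numbers AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO JoinTable1 (SomeString, RandomDate)
SELECT RIGHT('000' + CAST(n % 512 AS VARCHAR(4)), 4),    -- 512 distinct 4-character strings
       DATEADD(DAY, CAST(n % 3650 AS INT), '2000-01-01') -- arbitrary dates
FROM Numbers

INSERT INTO JoinTable2 (JoinString, Filler)
SELECT DISTINCT SomeString, 'x'
FROM JoinTable1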
If I join the two and have a filter that limits the resultset to a small number of rows, I get a loop join:
SELECT SomeString
FROM jointable1 j1
INNER JOIN jointable2 j2 ON j1.somestring = j2.joinstring
WHERE ID BETWEEN 0 AND 25
Table ‘JoinTable2’. Scan count 0, logical reads 50, physical reads 0.
Table ‘JoinTable1’. Scan count 1, logical reads 3, physical reads 0.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Great, now let’s up the rows involved.
SELECT SomeString
FROM jointable1 j1
INNER JOIN jointable2 j2 ON j1.somestring = j2.joinstring
WHERE ID BETWEEN 0 AND 5000
This does a merge join by default (there are indexes that can be used to retrieve rows sorted on both sides). As a merge join the stats look like this:
Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0.
Table ‘JoinTable1’. Scan count 1, logical reads 21, physical reads 0.
Table ‘JoinTable2’. Scan count 1, logical reads 25, physical reads 0.
SQL Server Execution Times:
CPU time = 15 ms, elapsed time = 209 ms.
Hmm… much higher duration than the first case. What if I force a loop join?
SELECT SomeString
FROM jointable1 j1
INNER JOIN jointable2 j2 ON j1.somestring = j2.joinstring
WHERE ID BETWEEN 0 AND 5000
OPTION (LOOP JOIN)
Table ‘JoinTable2’. Scan count 0, logical reads 10321, physical reads 0.
Table ‘JoinTable1’. Scan count 1, logical reads 21, physical reads 0.
SQL Server Execution Times:
CPU time = 47 ms, elapsed time = 220 ms.
The loop no longer appears to be the optimal join type.
The merge join
The merge join requires that both resultsets are sorted by the join column(s). It’s a join type that scales well with the size of the resultsets. The cost of the join comes when one or both resultsets have to be sorted before the join can be done. When both resultsets are already sorted by the join key (probably because of the indexes used to retrieve the data), the merge join is quite an optimal join regardless of the number of rows. When the smaller of the two resultsets needs to be sorted before the join, a merge can still be relatively efficient. When both resultsets need sorting, it is likely that a different join will be more optimal.
Let’s start with the same tables as in the first example (with even more rows; in fact, all the rows):
SELECT ID, SomeString
FROM jointable1 j1
INNER JOIN jointable2 j2 ON j1.somestring = j2.joinstring
This by default results in a merge join, with a scan of the nonclustered index on the large table and of the clustered index on the smaller table.
Table ‘JoinTable1’. Scan count 1, logical reads 2109, physical reads 0.
Table ‘JoinTable2’. Scan count 1, logical reads 19, physical reads 0.
SQL Server Execution Times:
CPU time = 1201 ms, elapsed time = 11458 ms.
Most of the elapsed time is the time required to display the results. If I now go and change the order of the columns in the nonclustered index, it can no longer be used to get the data in the correct order for the join. The index is still the same size and should still have the same scan cost. With the reordered index (sketched below), we now get the semi-expected hash join.
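The reorder itself would be something like this (the exact statements aren’t shown in the original post):

-- Reverse the key column order so the index no longer returns rows sorted by SomeString
DROP INDEX idx_Test ON JoinTable1
CREATE INDEX idx_Test ON JoinTable1 (ID, SomeString)

Rerunning the same query then gives these stats: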
Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0.
Table ‘JoinTable1’. Scan count 1, logical reads 2109, physical reads 0.
Table ‘JoinTable2’. Scan count 1, logical reads 25, physical reads 0.
SQL Server Execution Times:
CPU time = 1685 ms, elapsed time = 11726 ms.
Ok, now what if I forced it back to a merge join? SQL would have to first sort a 1 million row table, then do the join.
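Presumably with a hint along these lines (the hinted query isn’t shown in the original post):

SELECT ID, SomeString
FROM jointable1 j1
INNER JOIN jointable2 j2 ON j1.somestring = j2.joinstring
OPTION (MERGE JOIN)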
Table ‘JoinTable1’. Scan count 1, logical reads 2109, physical reads 0.
Table ‘JoinTable2’. Scan count 1, logical reads 19, physical reads 0.
SQL Server Execution Times:
CPU time = 6661 ms, elapsed time = 21944 ms.
The IOs are unchanged; it’s still just a single scan of each index involved, but look at that CPU cost.
One additional point about the merge join is that there must be at least one equi-join predicate present. If the only join conditions between two tables are inequalities, then merge is not a valid join type, and an attempt to force a merge join will result in an error.
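For example, a query like this against the same tables (a hypothetical illustration) should fail with an error rather than produce a plan:

-- Only an inequality predicate, so merge is not a valid join type here;
-- forcing it should fail (error 8622, "Query processor could not produce
-- a query plan because of the hints defined in this query...")
SELECT SomeString
FROM jointable1 j1
INNER JOIN jointable2 j2 ON j1.somestring > j2.joinstring
OPTION (MERGE JOIN)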
The hash join
This join type seems to be the most hated one. So often people are horrified by the appearance of a hash join in a query plan and ask, “Should I force a loop join?”
Well, probably not.
A hash join is the heavy lifter of the join types. It’s the one that operates efficiently on huge, unsorted resultsets and it’s the one that parallelises best. Remember that a loop join is efficient if one of the resultsets has a small number of rows, and merge joins are efficient if both resultsets are already sorted on the join column(s). When neither of those joins proves optimal, that’s when the hash join is used.
Sometimes the presence of a hash join suggests that there are missing indexes or that more rows are being processed than necessary. Other times it simply means that the numbers of rows involved in the query rules out the other two join types.
I’m not going to do any examples here, because the hash join is the one join type that doesn’t perform terribly when called on smaller resultsets. It still works, and fairly well; it’s just that the other joins may be better in those cases.
As with the merge join, the hash join requires at least one equi-join predicate to work, and an attempt to force a hash join when there are only inequality join predicates will result in an error.
So next time someone asks ‘Is a hash join bad?’, the correct answer to give them is ‘It depends, how many rows is it joining?’
Hi:
I have a question on indexing one of your examples.
For instance, if we take this example from up above…
SELECT SomeString
FROM jointable1 j1
INNER JOIN jointable2 j2 ON j1.somestring = j2.joinstring
WHERE ID BETWEEN 0 AND 5000
OPTION (LOOP JOIN)
Where would you start the index: on the where clause or the join clause?
Depends. Both can work. In this specific case, the where clause refers to joinTable1, which becomes the outer table of the join, so an index on the join column won’t be used. If it was the inner table, that index would be very useful.
In a real-life situation, I’d probably test both options, (ID, someString) and (Somestring, ID), and see which one produced the better performance.
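Concretely, the two candidates would look like this (index names are illustrative):

-- Option 1: lead on the where-clause column
CREATE INDEX idx_ID_String ON JoinTable1 (ID, SomeString)
-- Option 2: lead on the join column
CREATE INDEX idx_String_ID ON JoinTable1 (SomeString, ID)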
Go have a look at some of my indexing posts (if you haven’t), specifically the two on selectivity and column order.
Wow, I did not know that you would respond so soon.
Thanks and I will check out the rest of your index posts soon. 😀
Great post!
One thing I understand from here is that it is better to let the query optimizer decide what method should be used, rather than forcing a join type using hints.
If my understanding is correct, I wonder why there should be so much complicated literature elsewhere (including the MSDN) about explicit hints.
Pretty much, yes, in most cases.
There are times when you will want to overrule the optimiser; the trick is to know when those times are, and to have a full understanding of why you’re overruling it and what the effects will be.
Great article, again…keep them coming
I need to confirm if I am right or wrong.
In my Database, the joins in source tables are based on Primary key, so I always have an index to help for joining.
I tried forcing the loop join with USE_NL_WITH_INDEX and it helps most of the time, even if the number of rows is high, almost a lakh (100,000) records per table.
So is it correct to conclude that nested loops with an index are most helpful if the join columns have a unique index, or in other words when the joins are based on a Primary Key – Foreign Key relationship?
Not necessarily. I can trivially generate an example where there’s a foreign key constraint and the most optimal join is hash or merge.
Leave the join choice to SQL Server unless you know absolutely why the join you’re forcing is better, know that it’s better under all circumstances and test frequently to ensure it’s still the best join type.
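As a sketch of the kind of counter-example meant above (hypothetical tables, not from the original reply), a perfectly good Primary Key – Foreign Key join can still favour a hash or merge join once the whole child table is involved:

CREATE TABLE Parent (
    ParentID INT PRIMARY KEY,
    Name VARCHAR(50)
)
CREATE TABLE Child (
    ChildID INT IDENTITY PRIMARY KEY,
    ParentID INT NOT NULL FOREIGN KEY REFERENCES Parent (ParentID),
    Payload CHAR(100)
)

-- With, say, a million rows in Child and a few thousand in Parent, this join
-- touches every row on both sides, so there is no small outer input and a
-- nested loop (one seek per outer row) is unlikely to be the cheapest option
SELECT p.Name, c.Payload
FROM Parent p
INNER JOIN Child c ON c.ParentID = p.ParentID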