How come a table variable is improving the performance of a query in this circumstance?
Problem
For this specific case, which I will try to explain below, using a table variable performs better than not using one.
I would like to know why and, if possible, how to get rid of the table variable.
This is the query using the table variable:

```sql
USE [BISource_UAT]
GO

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SET NOCOUNT ON;

DECLARE @OrderStartDate DATETIME = '15-feb-2015';
DECLARE @OrderEndDate DATETIME = '28-feb-2016';

DECLARE @tmp TABLE
(
    strBxOrderNo VARCHAR(20),
    sintReturnId INT
);

INSERT INTO @tmp
SELECT strBxOrderNo,
       sintReturnId
FROM TABLEBACKUPS.dbo.tblBReturnHistory rh
WHERE rh.sintReturnStatusId IN (3)
  AND rh.dtmAdded >= @OrderStartDate
  AND rh.dtmAdded < @OrderEndDate
-- (the remainder of the batch was lost from the original post;
-- alias o refers to an orders table joined against @tmp)
  AND o.sdtmOrdCreated >= @OrderStartDate
```

This produces the following stats:

```
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 78 ms, elapsed time = 86 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table '#BF0B2154'. Scan count 0, logical reads 1957, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tblBReturnHistory'. Scan count 1, logical reads 13, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 9 ms.
```
Solution
The main factors in play here are:
- The optimizer does not try to find the best plan; its goal is to find a reasonable plan quickly
- It assumes the query will be run with a cold cache
- The cost model used favours sequential I/O over random I/O
- Repeated seeks into an index are assumed to be randomly distributed
The cardinality estimate for a table variable is 1 row (unless a statement level recompile occurs, or trace flag 2453 is active). This low estimate results in a very low-cost plan, featuring a navigational strategy based on nested loops. This plan may continue to be effective for relatively low row counts, especially if the data needed does not need to be read in from persistent storage.
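The statement-level recompile mentioned above can be requested explicitly: with OPTION (RECOMPILE), the statement is compiled after @tmp has been populated, so the optimizer uses the actual row count instead of the fixed one-row guess. A minimal sketch, in which the join target tblBOrder and its join column are hypothetical stand-ins (the original join was truncated from the question):

```sql
SELECT t.strBxOrderNo,
       t.sintReturnId
FROM @tmp AS t
JOIN dbo.tblBOrder AS o              -- hypothetical orders table
    ON o.strBxOrderNo = t.strBxOrderNo
WHERE o.sdtmOrdCreated >= @OrderStartDate
OPTION (RECOMPILE);                  -- compile with the real cardinality of @tmp
```

Trace flag 2453 achieves a similar effect instance- or session-wide, without editing each query.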
With more accurate cardinality estimates, the optimizer favours a plan using hash joins and a few scans. This appears to be cheaper than a navigational strategy, given the assumptions listed above; especially concerning the cold cache, and the relatively low cost of a sequential scan compared with many seeks (assuming a largely random I/O pattern).
The table variable plan may be slower than the alternative if the data needed is not in memory - or it may not. The cost model is exactly that - a model - the exact numbers used may not be representative of your hardware and configuration, and the assumptions made may not be valid in the particular circumstances.
All these caveats apply especially to low cost queries (which both are) since small cost changes can produce very different plan shapes. In fact, both plans are successful in that they produce results quickly and efficiently enough.
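If the aim is simply to get rid of the table variable, the closest swap is a local temporary table: unlike a table variable, a #temp table has column statistics, so joins against it are costed from the real row count without any hints. A sketch under the question's table and column names:

```sql
CREATE TABLE #tmp
(
    strBxOrderNo VARCHAR(20),
    sintReturnId INT
);

INSERT INTO #tmp
SELECT strBxOrderNo,
       sintReturnId
FROM TABLEBACKUPS.dbo.tblBReturnHistory AS rh
WHERE rh.sintReturnStatusId IN (3)
  AND rh.dtmAdded >= @OrderStartDate
  AND rh.dtmAdded < @OrderEndDate;

-- Subsequent joins against #tmp are estimated from its actual row
-- count and column statistics, not the one-row table-variable guess.
```

Whether this produces a faster plan still depends on the caveats above; it changes the optimizer's information, not the underlying cost model.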
Context
StackExchange Database Administrators Q#134987, answer score: 9
Revisions (0)
No revisions yet.